00:00:00.001 Started by upstream project "autotest-per-patch" build number 132086 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.043 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.044 The recommended git tool is: git 00:00:00.044 using credential 00000000-0000-0000-0000-000000000002 00:00:00.045 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.081 Fetching changes from the remote Git repository 00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.152 > git --version # 'git version 2.39.2' 00:00:00.152 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.168 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.168 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.467 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.482 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.497 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.497 > git config core.sparsecheckout # timeout=10 00:00:05.507 > git read-tree -mu HEAD # timeout=10 00:00:05.524 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.543 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.543 > git 
rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.641 [Pipeline] Start of Pipeline 00:00:05.656 [Pipeline] library 00:00:05.657 Loading library shm_lib@master 00:00:05.658 Library shm_lib@master is cached. Copying from home. 00:00:05.674 [Pipeline] node 00:00:05.683 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.684 [Pipeline] { 00:00:05.692 [Pipeline] catchError 00:00:05.693 [Pipeline] { 00:00:05.701 [Pipeline] wrap 00:00:05.707 [Pipeline] { 00:00:05.712 [Pipeline] stage 00:00:05.713 [Pipeline] { (Prologue) 00:00:05.920 [Pipeline] sh 00:00:06.204 + logger -p user.info -t JENKINS-CI 00:00:06.228 [Pipeline] echo 00:00:06.230 Node: GP6 00:00:06.238 [Pipeline] sh 00:00:06.545 [Pipeline] setCustomBuildProperty 00:00:06.554 [Pipeline] echo 00:00:06.555 Cleanup processes 00:00:06.559 [Pipeline] sh 00:00:06.843 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.843 627212 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.857 [Pipeline] sh 00:00:07.145 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.145 ++ grep -v 'sudo pgrep' 00:00:07.145 ++ awk '{print $1}' 00:00:07.145 + sudo kill -9 00:00:07.145 + true 00:00:07.160 [Pipeline] cleanWs 00:00:07.170 [WS-CLEANUP] Deleting project workspace... 00:00:07.170 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.177 [WS-CLEANUP] done 00:00:07.180 [Pipeline] setCustomBuildProperty 00:00:07.189 [Pipeline] sh 00:00:07.471 + sudo git config --global --replace-all safe.directory '*' 00:00:07.557 [Pipeline] httpRequest 00:00:07.976 [Pipeline] echo 00:00:07.977 Sorcerer 10.211.164.101 is alive 00:00:07.987 [Pipeline] retry 00:00:07.989 [Pipeline] { 00:00:08.001 [Pipeline] httpRequest 00:00:08.005 HttpMethod: GET 00:00:08.006 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.007 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.013 Response Code: HTTP/1.1 200 OK 00:00:08.014 Success: Status code 200 is in the accepted range: 200,404 00:00:08.014 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:22.902 [Pipeline] } 00:00:22.921 [Pipeline] // retry 00:00:22.928 [Pipeline] sh 00:00:23.213 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:23.231 [Pipeline] httpRequest 00:00:23.661 [Pipeline] echo 00:00:23.663 Sorcerer 10.211.164.101 is alive 00:00:23.672 [Pipeline] retry 00:00:23.674 [Pipeline] { 00:00:23.687 [Pipeline] httpRequest 00:00:23.691 HttpMethod: GET 00:00:23.692 URL: http://10.211.164.101/packages/spdk_481542548e9a1a582482d45933c41aa928fbc68c.tar.gz 00:00:23.693 Sending request to url: http://10.211.164.101/packages/spdk_481542548e9a1a582482d45933c41aa928fbc68c.tar.gz 00:00:23.700 Response Code: HTTP/1.1 200 OK 00:00:23.700 Success: Status code 200 is in the accepted range: 200,404 00:00:23.701 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_481542548e9a1a582482d45933c41aa928fbc68c.tar.gz 00:04:23.919 [Pipeline] } 00:04:23.937 [Pipeline] // retry 00:04:23.944 [Pipeline] sh 00:04:24.239 + tar --no-same-owner -xf spdk_481542548e9a1a582482d45933c41aa928fbc68c.tar.gz 00:04:26.789 [Pipeline] sh 00:04:27.077 + git -C spdk log 
--oneline -n5 00:04:27.077 481542548 accel: Add spdk_accel_sequence_has_task() to query what sequence does 00:04:27.077 a4d8602f2 nvmf: Add no_metadata option to nvmf_subsystem_add_ns 00:04:27.077 15b283ee8 nvmf: Get metadata config by not bdev but bdev_desc 00:04:27.077 cec609db6 bdevperf: Add no_metadata option 00:04:27.077 39e719aa5 bdevperf: Get metadata config by not bdev but bdev_desc 00:04:27.089 [Pipeline] } 00:04:27.102 [Pipeline] // stage 00:04:27.112 [Pipeline] stage 00:04:27.114 [Pipeline] { (Prepare) 00:04:27.131 [Pipeline] writeFile 00:04:27.147 [Pipeline] sh 00:04:27.438 + logger -p user.info -t JENKINS-CI 00:04:27.453 [Pipeline] sh 00:04:27.743 + logger -p user.info -t JENKINS-CI 00:04:27.755 [Pipeline] sh 00:04:28.043 + cat autorun-spdk.conf 00:04:28.043 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:28.043 SPDK_TEST_NVMF=1 00:04:28.043 SPDK_TEST_NVME_CLI=1 00:04:28.043 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:28.043 SPDK_TEST_NVMF_NICS=e810 00:04:28.043 SPDK_TEST_VFIOUSER=1 00:04:28.043 SPDK_RUN_UBSAN=1 00:04:28.043 NET_TYPE=phy 00:04:28.052 RUN_NIGHTLY=0 00:04:28.056 [Pipeline] readFile 00:04:28.080 [Pipeline] withEnv 00:04:28.081 [Pipeline] { 00:04:28.093 [Pipeline] sh 00:04:28.384 + set -ex 00:04:28.384 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:28.384 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:28.384 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:28.384 ++ SPDK_TEST_NVMF=1 00:04:28.384 ++ SPDK_TEST_NVME_CLI=1 00:04:28.384 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:28.384 ++ SPDK_TEST_NVMF_NICS=e810 00:04:28.384 ++ SPDK_TEST_VFIOUSER=1 00:04:28.384 ++ SPDK_RUN_UBSAN=1 00:04:28.384 ++ NET_TYPE=phy 00:04:28.384 ++ RUN_NIGHTLY=0 00:04:28.384 + case $SPDK_TEST_NVMF_NICS in 00:04:28.384 + DRIVERS=ice 00:04:28.384 + [[ tcp == \r\d\m\a ]] 00:04:28.384 + [[ -n ice ]] 00:04:28.384 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:28.384 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:28.384 rmmod: 
ERROR: Module mlx5_ib is not currently loaded 00:04:28.384 rmmod: ERROR: Module irdma is not currently loaded 00:04:28.384 rmmod: ERROR: Module i40iw is not currently loaded 00:04:28.384 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:28.384 + true 00:04:28.384 + for D in $DRIVERS 00:04:28.384 + sudo modprobe ice 00:04:28.384 + exit 0 00:04:28.394 [Pipeline] } 00:04:28.409 [Pipeline] // withEnv 00:04:28.414 [Pipeline] } 00:04:28.428 [Pipeline] // stage 00:04:28.436 [Pipeline] catchError 00:04:28.437 [Pipeline] { 00:04:28.449 [Pipeline] timeout 00:04:28.449 Timeout set to expire in 1 hr 0 min 00:04:28.451 [Pipeline] { 00:04:28.464 [Pipeline] stage 00:04:28.466 [Pipeline] { (Tests) 00:04:28.480 [Pipeline] sh 00:04:28.775 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:28.775 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:28.775 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:28.775 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:28.775 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:28.775 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:28.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:28.775 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:28.775 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:28.775 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:28.775 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:28.775 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:28.775 + source /etc/os-release 00:04:28.775 ++ NAME='Fedora Linux' 00:04:28.775 ++ VERSION='39 (Cloud Edition)' 00:04:28.775 ++ ID=fedora 00:04:28.775 ++ VERSION_ID=39 00:04:28.775 ++ VERSION_CODENAME= 00:04:28.775 ++ PLATFORM_ID=platform:f39 00:04:28.776 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:28.776 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:28.776 ++ LOGO=fedora-logo-icon 00:04:28.776 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:28.776 ++ HOME_URL=https://fedoraproject.org/ 00:04:28.776 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:28.776 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:28.776 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:28.776 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:28.776 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:28.776 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:28.776 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:28.776 ++ SUPPORT_END=2024-11-12 00:04:28.776 ++ VARIANT='Cloud Edition' 00:04:28.776 ++ VARIANT_ID=cloud 00:04:28.776 + uname -a 00:04:28.776 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:28.776 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:29.715 Hugepages 00:04:29.715 node hugesize free / total 00:04:29.716 node0 1048576kB 0 / 0 00:04:29.716 node0 2048kB 0 / 0 00:04:29.716 node1 1048576kB 0 / 0 00:04:29.716 node1 2048kB 0 / 0 00:04:29.716 00:04:29.716 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:29.716 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:29.716 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:04:29.716 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:29.716 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:29.716 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:29.716 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:29.716 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:29.975 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:29.975 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:29.975 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:29.975 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:29.975 + rm -f /tmp/spdk-ld-path 00:04:29.975 + source autorun-spdk.conf 00:04:29.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.975 ++ SPDK_TEST_NVMF=1 00:04:29.975 ++ SPDK_TEST_NVME_CLI=1 00:04:29.975 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:29.975 ++ SPDK_TEST_NVMF_NICS=e810 00:04:29.975 ++ SPDK_TEST_VFIOUSER=1 00:04:29.975 ++ SPDK_RUN_UBSAN=1 00:04:29.975 ++ NET_TYPE=phy 00:04:29.975 ++ RUN_NIGHTLY=0 00:04:29.975 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:29.975 + [[ -n '' ]] 00:04:29.975 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:29.975 + for M in /var/spdk/build-*-manifest.txt 00:04:29.975 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:29.975 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.975 + for M in /var/spdk/build-*-manifest.txt 00:04:29.975 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:29.975 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.975 + for M in /var/spdk/build-*-manifest.txt 00:04:29.975 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:29.975 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:29.975 ++ uname 00:04:29.975 + [[ Linux == \L\i\n\u\x ]] 00:04:29.975 + sudo dmesg -T 00:04:29.975 + sudo dmesg --clear 00:04:29.975 + dmesg_pid=628532 00:04:29.975 + [[ Fedora Linux == FreeBSD ]] 00:04:29.975 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.975 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:29.975 + sudo dmesg -Tw 00:04:29.975 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:29.975 + [[ -x /usr/src/fio-static/fio ]] 00:04:29.975 + export FIO_BIN=/usr/src/fio-static/fio 00:04:29.975 + FIO_BIN=/usr/src/fio-static/fio 00:04:29.975 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:29.975 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:29.975 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:29.975 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.975 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:29.975 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:29.975 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.975 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:29.975 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:29.975 Test configuration: 00:04:29.975 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:29.975 SPDK_TEST_NVMF=1 00:04:29.975 SPDK_TEST_NVME_CLI=1 00:04:29.975 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:29.975 SPDK_TEST_NVMF_NICS=e810 00:04:29.975 SPDK_TEST_VFIOUSER=1 00:04:29.975 SPDK_RUN_UBSAN=1 00:04:29.975 NET_TYPE=phy 00:04:29.975 RUN_NIGHTLY=0 08:40:43 -- common/autotest_common.sh@1688 -- $ [[ n == y ]] 00:04:29.975 08:40:43 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:29.975 08:40:43 -- scripts/common.sh@15 -- 
$ shopt -s extglob 00:04:29.975 08:40:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:29.975 08:40:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.975 08:40:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.975 08:40:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.975 08:40:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.975 08:40:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.975 08:40:43 -- paths/export.sh@5 -- $ export PATH 00:04:29.975 08:40:43 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.975 08:40:43 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:29.975 08:40:43 -- common/autobuild_common.sh@486 -- $ date +%s 00:04:29.975 08:40:43 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730878843.XXXXXX 00:04:29.975 08:40:43 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730878843.Q548vV 00:04:29.975 08:40:43 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:04:29.975 08:40:43 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:04:29.976 08:40:43 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:29.976 08:40:43 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:29.976 08:40:43 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:29.976 08:40:43 -- common/autobuild_common.sh@502 -- $ get_config_params 00:04:29.976 08:40:43 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:04:29.976 08:40:43 -- common/autotest_common.sh@10 -- $ set +x 00:04:29.976 08:40:43 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan 
--enable-coverage --with-ublk --with-vfio-user' 00:04:29.976 08:40:43 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:04:29.976 08:40:43 -- pm/common@17 -- $ local monitor 00:04:29.976 08:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.976 08:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.976 08:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.976 08:40:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.976 08:40:43 -- pm/common@21 -- $ date +%s 00:04:29.976 08:40:43 -- pm/common@25 -- $ sleep 1 00:04:29.976 08:40:43 -- pm/common@21 -- $ date +%s 00:04:29.976 08:40:43 -- pm/common@21 -- $ date +%s 00:04:29.976 08:40:43 -- pm/common@21 -- $ date +%s 00:04:29.976 08:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878843 00:04:29.976 08:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878843 00:04:29.976 08:40:43 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878843 00:04:29.976 08:40:43 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730878843 00:04:30.236 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878843_collect-vmstat.pm.log 00:04:30.236 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878843_collect-cpu-load.pm.log 00:04:30.236 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878843_collect-cpu-temp.pm.log 00:04:30.236 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730878843_collect-bmc-pm.bmc.pm.log 00:04:31.175 08:40:44 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:04:31.175 08:40:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:31.175 08:40:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:31.175 08:40:44 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.175 08:40:44 -- spdk/autobuild.sh@16 -- $ date -u 00:04:31.175 Wed Nov 6 07:40:44 AM UTC 2024 00:04:31.175 08:40:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:31.175 v25.01-pre-142-g481542548 00:04:31.175 08:40:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:31.175 08:40:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:31.175 08:40:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:31.175 08:40:44 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:31.175 08:40:44 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:31.175 08:40:44 -- common/autotest_common.sh@10 -- $ set +x 00:04:31.175 ************************************ 00:04:31.175 START TEST ubsan 00:04:31.175 ************************************ 00:04:31.175 08:40:44 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:04:31.175 using ubsan 00:04:31.175 00:04:31.175 real 0m0.000s 00:04:31.175 user 0m0.000s 00:04:31.175 sys 0m0.000s 00:04:31.175 08:40:44 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:31.175 08:40:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:31.175 ************************************ 00:04:31.175 END TEST ubsan 00:04:31.175 
************************************ 00:04:31.175 08:40:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:31.175 08:40:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:31.175 08:40:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:31.175 08:40:44 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:31.175 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:31.175 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:31.435 Using 'verbs' RDMA provider 00:04:42.000 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:52.093 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:52.093 Creating mk/config.mk...done. 00:04:52.093 Creating mk/cc.flags.mk...done. 00:04:52.093 Type 'make' to build. 
00:04:52.093 08:41:05 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:52.093 08:41:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:52.093 08:41:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:52.093 08:41:05 -- common/autotest_common.sh@10 -- $ set +x 00:04:52.093 ************************************ 00:04:52.093 START TEST make 00:04:52.093 ************************************ 00:04:52.093 08:41:05 make -- common/autotest_common.sh@1125 -- $ make -j48 00:04:52.356 make[1]: Nothing to be done for 'all'. 00:04:54.280 The Meson build system 00:04:54.280 Version: 1.5.0 00:04:54.280 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:54.280 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:54.280 Build type: native build 00:04:54.280 Project name: libvfio-user 00:04:54.280 Project version: 0.0.1 00:04:54.280 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:54.280 C linker for the host machine: cc ld.bfd 2.40-14 00:04:54.280 Host machine cpu family: x86_64 00:04:54.280 Host machine cpu: x86_64 00:04:54.280 Run-time dependency threads found: YES 00:04:54.280 Library dl found: YES 00:04:54.280 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:54.280 Run-time dependency json-c found: YES 0.17 00:04:54.280 Run-time dependency cmocka found: YES 1.1.7 00:04:54.280 Program pytest-3 found: NO 00:04:54.280 Program flake8 found: NO 00:04:54.280 Program misspell-fixer found: NO 00:04:54.280 Program restructuredtext-lint found: NO 00:04:54.280 Program valgrind found: YES (/usr/bin/valgrind) 00:04:54.280 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:54.280 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:54.280 Compiler for C supports arguments -Wwrite-strings: YES 00:04:54.280 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:54.280 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:54.280 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:54.280 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:54.280 Build targets in project: 8 00:04:54.280 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:54.280 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:54.280 00:04:54.280 libvfio-user 0.0.1 00:04:54.280 00:04:54.280 User defined options 00:04:54.280 buildtype : debug 00:04:54.280 default_library: shared 00:04:54.280 libdir : /usr/local/lib 00:04:54.280 00:04:54.280 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:54.866 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:55.130 [1/37] Compiling C object samples/null.p/null.c.o 00:04:55.130 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:55.131 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:55.131 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:55.131 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:55.131 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:55.131 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:55.131 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:55.131 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:55.131 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:55.131 [11/37] Compiling C object samples/server.p/server.c.o 00:04:55.131 [12/37] Compiling C 
object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:55.131 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:55.131 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:55.391 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:55.391 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:55.391 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:55.391 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:55.391 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:55.391 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:55.391 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:55.391 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:55.391 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:55.391 [24/37] Compiling C object samples/client.p/client.c.o 00:04:55.391 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:55.391 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:55.391 [27/37] Linking target samples/client 00:04:55.391 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:55.391 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:55.654 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:55.654 [31/37] Linking target test/unit_tests 00:04:55.654 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:55.654 [33/37] Linking target samples/null 00:04:55.654 [34/37] Linking target samples/server 00:04:55.654 [35/37] Linking target samples/lspci 00:04:55.654 [36/37] Linking target samples/gpio-pci-idio-16 00:04:55.654 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:55.654 INFO: autodetecting backend as ninja 00:04:55.654 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:55.917 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:56.857 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:56.857 ninja: no work to do. 00:05:02.132 The Meson build system 00:05:02.132 Version: 1.5.0 00:05:02.132 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:02.132 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:02.132 Build type: native build 00:05:02.132 Program cat found: YES (/usr/bin/cat) 00:05:02.132 Project name: DPDK 00:05:02.132 Project version: 24.03.0 00:05:02.132 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:02.132 C linker for the host machine: cc ld.bfd 2.40-14 00:05:02.132 Host machine cpu family: x86_64 00:05:02.132 Host machine cpu: x86_64 00:05:02.132 Message: ## Building in Developer Mode ## 00:05:02.132 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:02.132 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:02.132 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:02.132 Program python3 found: YES (/usr/bin/python3) 00:05:02.132 Program cat found: YES (/usr/bin/cat) 00:05:02.132 Compiler for C supports arguments -march=native: YES 00:05:02.132 Checking for size of "void *" : 8 00:05:02.132 Checking for size of "void *" : 8 (cached) 00:05:02.132 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:02.132 Library m found: YES 00:05:02.132 Library numa found: YES 00:05:02.132 Has header "numaif.h" : YES 00:05:02.132 Library fdt found: NO 
00:05:02.132 Library execinfo found: NO
00:05:02.132 Has header "execinfo.h" : YES
00:05:02.132 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:02.132 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:02.132 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:02.132 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:02.132 Run-time dependency openssl found: YES 3.1.1
00:05:02.132 Run-time dependency libpcap found: YES 1.10.4
00:05:02.132 Has header "pcap.h" with dependency libpcap: YES
00:05:02.132 Compiler for C supports arguments -Wcast-qual: YES
00:05:02.132 Compiler for C supports arguments -Wdeprecated: YES
00:05:02.132 Compiler for C supports arguments -Wformat: YES
00:05:02.132 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:02.132 Compiler for C supports arguments -Wformat-security: NO
00:05:02.132 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:02.132 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:02.132 Compiler for C supports arguments -Wnested-externs: YES
00:05:02.132 Compiler for C supports arguments -Wold-style-definition: YES
00:05:02.132 Compiler for C supports arguments -Wpointer-arith: YES
00:05:02.132 Compiler for C supports arguments -Wsign-compare: YES
00:05:02.132 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:02.132 Compiler for C supports arguments -Wundef: YES
00:05:02.132 Compiler for C supports arguments -Wwrite-strings: YES
00:05:02.132 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:02.132 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:02.132 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:02.132 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:02.132 Program objdump found: YES (/usr/bin/objdump)
00:05:02.132 Compiler for C supports arguments -mavx512f: YES
00:05:02.132 Checking if "AVX512 checking" compiles: YES
00:05:02.132 Fetching value of define "__SSE4_2__" : 1
00:05:02.132 Fetching value of define "__AES__" : 1
00:05:02.132 Fetching value of define "__AVX__" : 1
00:05:02.132 Fetching value of define "__AVX2__" : (undefined)
00:05:02.133 Fetching value of define "__AVX512BW__" : (undefined)
00:05:02.133 Fetching value of define "__AVX512CD__" : (undefined)
00:05:02.133 Fetching value of define "__AVX512DQ__" : (undefined)
00:05:02.133 Fetching value of define "__AVX512F__" : (undefined)
00:05:02.133 Fetching value of define "__AVX512VL__" : (undefined)
00:05:02.133 Fetching value of define "__PCLMUL__" : 1
00:05:02.133 Fetching value of define "__RDRND__" : 1
00:05:02.133 Fetching value of define "__RDSEED__" : (undefined)
00:05:02.133 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:02.133 Fetching value of define "__znver1__" : (undefined)
00:05:02.133 Fetching value of define "__znver2__" : (undefined)
00:05:02.133 Fetching value of define "__znver3__" : (undefined)
00:05:02.133 Fetching value of define "__znver4__" : (undefined)
00:05:02.133 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:02.133 Message: lib/log: Defining dependency "log"
00:05:02.133 Message: lib/kvargs: Defining dependency "kvargs"
00:05:02.133 Message: lib/telemetry: Defining dependency "telemetry"
00:05:02.133 Checking for function "getentropy" : NO
00:05:02.133 Message: lib/eal: Defining dependency "eal"
00:05:02.133 Message: lib/ring: Defining dependency "ring"
00:05:02.133 Message: lib/rcu: Defining dependency "rcu"
00:05:02.133 Message: lib/mempool: Defining dependency "mempool"
00:05:02.133 Message: lib/mbuf: Defining dependency "mbuf"
00:05:02.133 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:02.133 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:05:02.133 Compiler for C supports arguments -mpclmul: YES
00:05:02.133 Compiler for C supports arguments -maes: YES
00:05:02.133 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:02.133 Compiler for C supports arguments -mavx512bw: YES
00:05:02.133 Compiler for C supports arguments -mavx512dq: YES
00:05:02.133 Compiler for C supports arguments -mavx512vl: YES
00:05:02.133 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:02.133 Compiler for C supports arguments -mavx2: YES
00:05:02.133 Compiler for C supports arguments -mavx: YES
00:05:02.133 Message: lib/net: Defining dependency "net"
00:05:02.133 Message: lib/meter: Defining dependency "meter"
00:05:02.133 Message: lib/ethdev: Defining dependency "ethdev"
00:05:02.133 Message: lib/pci: Defining dependency "pci"
00:05:02.133 Message: lib/cmdline: Defining dependency "cmdline"
00:05:02.133 Message: lib/hash: Defining dependency "hash"
00:05:02.133 Message: lib/timer: Defining dependency "timer"
00:05:02.133 Message: lib/compressdev: Defining dependency "compressdev"
00:05:02.133 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:02.133 Message: lib/dmadev: Defining dependency "dmadev"
00:05:02.133 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:02.133 Message: lib/power: Defining dependency "power"
00:05:02.133 Message: lib/reorder: Defining dependency "reorder"
00:05:02.133 Message: lib/security: Defining dependency "security"
00:05:02.133 Has header "linux/userfaultfd.h" : YES
00:05:02.133 Has header "linux/vduse.h" : YES
00:05:02.133 Message: lib/vhost: Defining dependency "vhost"
00:05:02.133 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:02.133 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:02.133 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:02.133 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:02.133 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:02.133 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:02.133 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:02.133 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:02.133 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:02.133 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:02.133 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:02.133 Configuring doxy-api-html.conf using configuration
00:05:02.133 Configuring doxy-api-man.conf using configuration
00:05:02.133 Program mandb found: YES (/usr/bin/mandb)
00:05:02.133 Program sphinx-build found: NO
00:05:02.133 Configuring rte_build_config.h using configuration
00:05:02.133 Message:
00:05:02.133 =================
00:05:02.133 Applications Enabled
00:05:02.133 =================
00:05:02.133
00:05:02.133 apps:
00:05:02.133
00:05:02.133
00:05:02.133 Message:
00:05:02.133 =================
00:05:02.133 Libraries Enabled
00:05:02.133 =================
00:05:02.133
00:05:02.133 libs:
00:05:02.133 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:02.133 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:02.133 cryptodev, dmadev, power, reorder, security, vhost,
00:05:02.133
00:05:02.133 Message:
00:05:02.133 ===============
00:05:02.133 Drivers Enabled
00:05:02.133 ===============
00:05:02.133
00:05:02.133 common:
00:05:02.133
00:05:02.133 bus:
00:05:02.133 pci, vdev,
00:05:02.133 mempool:
00:05:02.133 ring,
00:05:02.133 dma:
00:05:02.133
00:05:02.133 net:
00:05:02.133
00:05:02.133 crypto:
00:05:02.133
00:05:02.133 compress:
00:05:02.133
00:05:02.133 vdpa:
00:05:02.133
00:05:02.133
00:05:02.133 Message:
00:05:02.133 =================
00:05:02.133 Content Skipped
00:05:02.133 =================
00:05:02.133
00:05:02.133 apps:
00:05:02.133 dumpcap: explicitly disabled via build config
00:05:02.133 graph: explicitly disabled via build config
00:05:02.133 pdump: explicitly disabled via build config
00:05:02.133 proc-info: explicitly disabled via build config
00:05:02.133 test-acl: explicitly disabled via build config
00:05:02.133 test-bbdev: explicitly disabled via build config
00:05:02.133 test-cmdline: explicitly disabled via build config
00:05:02.133 test-compress-perf: explicitly disabled via build config
00:05:02.133 test-crypto-perf: explicitly disabled via build config
00:05:02.133 test-dma-perf: explicitly disabled via build config
00:05:02.133 test-eventdev: explicitly disabled via build config
00:05:02.133 test-fib: explicitly disabled via build config
00:05:02.133 test-flow-perf: explicitly disabled via build config
00:05:02.133 test-gpudev: explicitly disabled via build config
00:05:02.133 test-mldev: explicitly disabled via build config
00:05:02.133 test-pipeline: explicitly disabled via build config
00:05:02.133 test-pmd: explicitly disabled via build config
00:05:02.133 test-regex: explicitly disabled via build config
00:05:02.133 test-sad: explicitly disabled via build config
00:05:02.133 test-security-perf: explicitly disabled via build config
00:05:02.133
00:05:02.133 libs:
00:05:02.133 argparse: explicitly disabled via build config
00:05:02.133 metrics: explicitly disabled via build config
00:05:02.133 acl: explicitly disabled via build config
00:05:02.133 bbdev: explicitly disabled via build config
00:05:02.133 bitratestats: explicitly disabled via build config
00:05:02.133 bpf: explicitly disabled via build config
00:05:02.133 cfgfile: explicitly disabled via build config
00:05:02.133 distributor: explicitly disabled via build config
00:05:02.133 efd: explicitly disabled via build config
00:05:02.133 eventdev: explicitly disabled via build config
00:05:02.133 dispatcher: explicitly disabled via build config
00:05:02.133 gpudev: explicitly disabled via build config
00:05:02.133 gro: explicitly disabled via build config
00:05:02.133 gso: explicitly disabled via build config
00:05:02.133 ip_frag: explicitly disabled via build config
00:05:02.133 jobstats: explicitly disabled via build config
00:05:02.133 latencystats: explicitly disabled via build config
00:05:02.133 lpm: explicitly disabled via build config
00:05:02.133 member: explicitly disabled via build config
00:05:02.133 pcapng: explicitly disabled via build config
00:05:02.133 rawdev: explicitly disabled via build config
00:05:02.133 regexdev: explicitly disabled via build config
00:05:02.133 mldev: explicitly disabled via build config
00:05:02.133 rib: explicitly disabled via build config
00:05:02.133 sched: explicitly disabled via build config
00:05:02.133 stack: explicitly disabled via build config
00:05:02.133 ipsec: explicitly disabled via build config
00:05:02.133 pdcp: explicitly disabled via build config
00:05:02.133 fib: explicitly disabled via build config
00:05:02.133 port: explicitly disabled via build config
00:05:02.133 pdump: explicitly disabled via build config
00:05:02.133 table: explicitly disabled via build config
00:05:02.133 pipeline: explicitly disabled via build config
00:05:02.134 graph: explicitly disabled via build config
00:05:02.134 node: explicitly disabled via build config
00:05:02.134
00:05:02.134 drivers:
00:05:02.134 common/cpt: not in enabled drivers build config
00:05:02.134 common/dpaax: not in enabled drivers build config
00:05:02.134 common/iavf: not in enabled drivers build config
00:05:02.134 common/idpf: not in enabled drivers build config
00:05:02.134 common/ionic: not in enabled drivers build config
00:05:02.134 common/mvep: not in enabled drivers build config
00:05:02.134 common/octeontx: not in enabled drivers build config
00:05:02.134 bus/auxiliary: not in enabled drivers build config
00:05:02.134 bus/cdx: not in enabled drivers build config
00:05:02.134 bus/dpaa: not in enabled drivers build config
00:05:02.134 bus/fslmc: not in enabled drivers build config
00:05:02.134 bus/ifpga: not in enabled drivers build config
00:05:02.134 bus/platform: not in enabled drivers build config
00:05:02.134 bus/uacce: not in enabled drivers build config
00:05:02.134 bus/vmbus: not in enabled drivers build config
00:05:02.134 common/cnxk: not in enabled drivers build config
00:05:02.134 common/mlx5: not in enabled drivers build config
00:05:02.134 common/nfp: not in enabled drivers build config
00:05:02.134 common/nitrox: not in enabled drivers build config
00:05:02.134 common/qat: not in enabled drivers build config
00:05:02.134 common/sfc_efx: not in enabled drivers build config
00:05:02.134 mempool/bucket: not in enabled drivers build config
00:05:02.134 mempool/cnxk: not in enabled drivers build config
00:05:02.134 mempool/dpaa: not in enabled drivers build config
00:05:02.134 mempool/dpaa2: not in enabled drivers build config
00:05:02.134 mempool/octeontx: not in enabled drivers build config
00:05:02.134 mempool/stack: not in enabled drivers build config
00:05:02.134 dma/cnxk: not in enabled drivers build config
00:05:02.134 dma/dpaa: not in enabled drivers build config
00:05:02.134 dma/dpaa2: not in enabled drivers build config
00:05:02.134 dma/hisilicon: not in enabled drivers build config
00:05:02.134 dma/idxd: not in enabled drivers build config
00:05:02.134 dma/ioat: not in enabled drivers build config
00:05:02.134 dma/skeleton: not in enabled drivers build config
00:05:02.134 net/af_packet: not in enabled drivers build config
00:05:02.134 net/af_xdp: not in enabled drivers build config
00:05:02.134 net/ark: not in enabled drivers build config
00:05:02.134 net/atlantic: not in enabled drivers build config
00:05:02.134 net/avp: not in enabled drivers build config
00:05:02.134 net/axgbe: not in enabled drivers build config
00:05:02.134 net/bnx2x: not in enabled drivers build config
00:05:02.134 net/bnxt: not in enabled drivers build config
00:05:02.134 net/bonding: not in enabled drivers build config
00:05:02.134 net/cnxk: not in enabled drivers build config
00:05:02.134 net/cpfl: not in enabled drivers build config
00:05:02.134 net/cxgbe: not in enabled drivers build config
00:05:02.134 net/dpaa: not in enabled drivers build config
00:05:02.134 net/dpaa2: not in enabled drivers build config
00:05:02.134 net/e1000: not in enabled drivers build config
00:05:02.134 net/ena: not in enabled drivers build config
00:05:02.134 net/enetc: not in enabled drivers build config
00:05:02.134 net/enetfec: not in enabled drivers build config
00:05:02.134 net/enic: not in enabled drivers build config
00:05:02.134 net/failsafe: not in enabled drivers build config
00:05:02.134 net/fm10k: not in enabled drivers build config
00:05:02.134 net/gve: not in enabled drivers build config
00:05:02.134 net/hinic: not in enabled drivers build config
00:05:02.134 net/hns3: not in enabled drivers build config
00:05:02.134 net/i40e: not in enabled drivers build config
00:05:02.134 net/iavf: not in enabled drivers build config
00:05:02.134 net/ice: not in enabled drivers build config
00:05:02.134 net/idpf: not in enabled drivers build config
00:05:02.134 net/igc: not in enabled drivers build config
00:05:02.134 net/ionic: not in enabled drivers build config
00:05:02.134 net/ipn3ke: not in enabled drivers build config
00:05:02.134 net/ixgbe: not in enabled drivers build config
00:05:02.134 net/mana: not in enabled drivers build config
00:05:02.134 net/memif: not in enabled drivers build config
00:05:02.134 net/mlx4: not in enabled drivers build config
00:05:02.134 net/mlx5: not in enabled drivers build config
00:05:02.134 net/mvneta: not in enabled drivers build config
00:05:02.134 net/mvpp2: not in enabled drivers build config
00:05:02.134 net/netvsc: not in enabled drivers build config
00:05:02.134 net/nfb: not in enabled drivers build config
00:05:02.134 net/nfp: not in enabled drivers build config
00:05:02.134 net/ngbe: not in enabled drivers build config
00:05:02.134 net/null: not in enabled drivers build config
00:05:02.134 net/octeontx: not in enabled drivers build config
00:05:02.134 net/octeon_ep: not in enabled drivers build config
00:05:02.134 net/pcap: not in enabled drivers build config
00:05:02.134 net/pfe: not in enabled drivers build config
00:05:02.134 net/qede: not in enabled drivers build config
00:05:02.134 net/ring: not in enabled drivers build config
00:05:02.134 net/sfc: not in enabled drivers build config
00:05:02.134 net/softnic: not in enabled drivers build config
00:05:02.134 net/tap: not in enabled drivers build config
00:05:02.134 net/thunderx: not in enabled drivers build config
00:05:02.134 net/txgbe: not in enabled drivers build config
00:05:02.134 net/vdev_netvsc: not in enabled drivers build config
00:05:02.134 net/vhost: not in enabled drivers build config
00:05:02.134 net/virtio: not in enabled drivers build config
00:05:02.134 net/vmxnet3: not in enabled drivers build config
00:05:02.134 raw/*: missing internal dependency, "rawdev"
00:05:02.134 crypto/armv8: not in enabled drivers build config
00:05:02.134 crypto/bcmfs: not in enabled drivers build config
00:05:02.134 crypto/caam_jr: not in enabled drivers build config
00:05:02.134 crypto/ccp: not in enabled drivers build config
00:05:02.134 crypto/cnxk: not in enabled drivers build config
00:05:02.134 crypto/dpaa_sec: not in enabled drivers build config
00:05:02.134 crypto/dpaa2_sec: not in enabled drivers build config
00:05:02.134 crypto/ipsec_mb: not in enabled drivers build config
00:05:02.134 crypto/mlx5: not in enabled drivers build config
00:05:02.134 crypto/mvsam: not in enabled drivers build config
00:05:02.134 crypto/nitrox: not in enabled drivers build config
00:05:02.134 crypto/null: not in enabled drivers build config
00:05:02.134 crypto/octeontx: not in enabled drivers build config
00:05:02.134 crypto/openssl: not in enabled drivers build config
00:05:02.134 crypto/scheduler: not in enabled drivers build config
00:05:02.134 crypto/uadk: not in enabled drivers build config
00:05:02.134 crypto/virtio: not in enabled drivers build config
00:05:02.134 compress/isal: not in enabled drivers build config
00:05:02.134 compress/mlx5: not in enabled drivers build config
00:05:02.134 compress/nitrox: not in enabled drivers build config
00:05:02.134 compress/octeontx: not in enabled drivers build config
00:05:02.134 compress/zlib: not in enabled drivers build config
00:05:02.134 regex/*: missing internal dependency, "regexdev"
00:05:02.134 ml/*: missing internal dependency, "mldev"
00:05:02.134 vdpa/ifc: not in enabled drivers build config
00:05:02.134 vdpa/mlx5: not in enabled drivers build config
00:05:02.134 vdpa/nfp: not in enabled drivers build config
00:05:02.134 vdpa/sfc: not in enabled drivers build config
00:05:02.134 event/*: missing internal dependency, "eventdev"
00:05:02.134 baseband/*: missing internal dependency, "bbdev"
00:05:02.134 gpu/*: missing internal dependency, "gpudev"
00:05:02.134
00:05:02.134
00:05:02.134 Build targets in project: 85
00:05:02.134
00:05:02.134 DPDK 24.03.0
00:05:02.134
00:05:02.134 User defined options
00:05:02.134 buildtype : debug
00:05:02.134 default_library : shared
00:05:02.134 libdir : lib
00:05:02.134 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:02.134 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:02.134 c_link_args :
00:05:02.134 cpu_instruction_set: native
00:05:02.134 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:05:02.134 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:05:02.134 enable_docs : false
00:05:02.134 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:02.134 enable_kmods : false
00:05:02.134 max_lcores : 128
00:05:02.134 tests : false
00:05:02.134
00:05:02.134 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:02.134 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:02.134 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:02.134 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:02.134 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:02.134 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:02.134 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:02.134 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:02.134 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:02.134 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:02.134 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:02.135 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:02.135 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:02.135 [12/268] Linking static target lib/librte_kvargs.a
00:05:02.135 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:02.135 [14/268] Linking static target lib/librte_log.a
00:05:02.393 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:02.393 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:02.967 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.967 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:02.967 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:02.967 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:02.967 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:02.967 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:02.967 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:02.967 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:02.967 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:02.967 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:02.967 [27/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:02.967 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:02.967 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:02.967 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:02.967 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:02.967 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:02.967 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:02.967 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:02.968 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:03.232 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:03.232 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:03.232 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:03.232 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:03.232 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:03.232 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:03.232 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:03.232 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:03.232 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:03.232 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:03.232 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:03.232 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:03.232 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:03.232 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:03.232 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:03.232 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:03.232 [52/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:03.232 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:03.232 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:03.232 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:03.232 [56/268] Linking static target lib/librte_telemetry.a
00:05:03.232 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:03.232 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:03.232 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:03.232 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:03.232 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:03.232 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:03.494 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:03.494 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:03.494 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:03.494 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:03.494 [67/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:03.494 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:03.494 [69/268] Linking static target lib/librte_pci.a
00:05:03.756 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:03.756 [71/268] Linking target lib/librte_log.so.24.1
00:05:03.756 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:03.756 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:03.756 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:04.022 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:04.022 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:04.022 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:04.022 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:04.022 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:04.022 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:04.022 [81/268] Linking target lib/librte_kvargs.so.24.1
00:05:04.022 [82/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:04.022 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:04.022 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:04.022 [85/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:04.022 [86/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.022 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:04.022 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:04.022 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:04.022 [90/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:04.022 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:04.022 [92/268] Linking static target lib/librte_ring.a
00:05:04.022 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:04.022 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:04.022 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:04.022 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:04.022 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:04.286 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:04.286 [99/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:04.286 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:04.286 [101/268] Linking static target lib/librte_meter.a
00:05:04.286 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:04.286 [103/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.286 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:04.286 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:04.286 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:04.286 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:04.286 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:04.286 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:04.286 [110/268] Linking target lib/librte_telemetry.so.24.1
00:05:04.286 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:04.286 [112/268] Linking static target lib/librte_eal.a
00:05:04.286 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:04.286 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:04.286 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:04.286 [116/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:04.286 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:04.286 [118/268] Linking static target lib/librte_mempool.a
00:05:04.286 [119/268] Linking static target lib/librte_rcu.a
00:05:04.286 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:04.286 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:04.286 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:04.286 [123/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:04.553 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:04.553 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:04.553 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:04.553 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:04.553 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:04.553 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:04.553 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:04.553 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:04.553 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:04.867 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:04.867 [134/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:04.867 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:04.867 [136/268] Linking static target lib/librte_net.a
00:05:04.867 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.867 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:04.867 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:04.867 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:04.867 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:04.867 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:04.867 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:04.867 [144/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:04.867 [145/268] Linking static target lib/librte_cmdline.a
00:05:05.127 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:05.127 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:05.127 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:05.127 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:05.127 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.127 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:05.127 [152/268] Linking static target lib/librte_timer.a
00:05:05.127 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:05.127 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:05.127 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:05.127 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:05.387 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.387 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:05.387 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:05.387 [160/268] Linking static target lib/librte_dmadev.a
00:05:05.387 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:05.387 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:05.387 [163/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:05.388 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:05.388 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:05.388 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:05.388 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.647 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:05.647 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:05.647 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.647 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:05.647 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:05.647 [173/268] Linking static target lib/librte_power.a
00:05:05.647 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:05.647 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:05.647 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:05.647 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:05.647 [178/268] Linking static target lib/librte_hash.a
00:05:05.647 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:05.647 [180/268] Linking static target lib/librte_compressdev.a
00:05:05.647 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:05.647 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:05.906 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:05.906 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:05.906 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:05.906 [186/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.906 [187/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:05.906 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:05.906 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:05.906 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:05.906 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:05.906 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:05.906 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:05.906 [194/268] Linking static target lib/librte_reorder.a
00:05:05.906 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:05.906 [196/268] Linking static target lib/librte_mbuf.a
00:05:05.906 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:06.164 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:06.164 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:06.164 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:06.164 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:06.164 [202/268] Linking static target lib/librte_security.a
00:05:06.164 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:06.164 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:06.164 [205/268] Linking static target drivers/librte_bus_pci.a
00:05:06.164 [206/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:06.164 [207/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.164 [208/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.164 [209/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:06.164 [210/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:06.164 [211/268] Linking static target drivers/librte_bus_vdev.a
00:05:06.164 [212/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.164 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.164 [214/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:05:06.422 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:06.423 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:06.423 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:06.423 [218/268] Linking static target drivers/librte_mempool_ring.a
00:05:06.423 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.423 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:06.423 [221/268] Linking static target lib/librte_ethdev.a
00:05:06.423 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:06.423 [223/268] Generating
lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:06.423 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:06.423 [225/268] Linking static target lib/librte_cryptodev.a 00:05:06.681 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.614 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:08.989 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:10.889 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.889 [230/268] Linking target lib/librte_eal.so.24.1 00:05:10.889 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:10.889 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:10.889 [233/268] Linking target lib/librte_timer.so.24.1 00:05:10.889 [234/268] Linking target lib/librte_ring.so.24.1 00:05:10.889 [235/268] Linking target lib/librte_meter.so.24.1 00:05:10.889 [236/268] Linking target lib/librte_dmadev.so.24.1 00:05:10.889 [237/268] Linking target lib/librte_pci.so.24.1 00:05:10.889 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:10.889 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:10.889 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:10.889 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:10.889 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:10.889 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:11.148 [244/268] Linking target lib/librte_rcu.so.24.1 00:05:11.148 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:11.148 [246/268] Linking target 
drivers/librte_bus_pci.so.24.1 00:05:11.148 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:11.148 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:11.148 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:11.148 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:11.405 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:11.405 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:11.405 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:11.405 [254/268] Linking target lib/librte_net.so.24.1 00:05:11.405 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:11.405 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:11.405 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:11.664 [258/268] Linking target lib/librte_security.so.24.1 00:05:11.664 [259/268] Linking target lib/librte_hash.so.24.1 00:05:11.664 [260/268] Linking target lib/librte_cmdline.so.24.1 00:05:11.664 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:11.664 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:11.664 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:11.664 [264/268] Linking target lib/librte_power.so.24.1 00:05:14.946 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:14.946 [266/268] Linking static target lib/librte_vhost.a 00:05:15.880 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:15.880 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:15.880 INFO: autodetecting backend as ninja 00:05:15.880 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:05:37.801 
CC lib/ut_mock/mock.o 00:05:37.801 CC lib/ut/ut.o 00:05:37.801 CC lib/log/log.o 00:05:37.801 CC lib/log/log_flags.o 00:05:37.801 CC lib/log/log_deprecated.o 00:05:37.801 LIB libspdk_ut_mock.a 00:05:37.801 LIB libspdk_ut.a 00:05:37.801 LIB libspdk_log.a 00:05:37.801 SO libspdk_ut_mock.so.6.0 00:05:37.801 SO libspdk_ut.so.2.0 00:05:37.801 SO libspdk_log.so.7.1 00:05:37.801 SYMLINK libspdk_ut.so 00:05:37.801 SYMLINK libspdk_ut_mock.so 00:05:37.801 SYMLINK libspdk_log.so 00:05:37.801 CC lib/dma/dma.o 00:05:37.801 CC lib/util/base64.o 00:05:37.801 CXX lib/trace_parser/trace.o 00:05:37.801 CC lib/util/bit_array.o 00:05:37.801 CC lib/util/cpuset.o 00:05:37.801 CC lib/ioat/ioat.o 00:05:37.801 CC lib/util/crc16.o 00:05:37.801 CC lib/util/crc32.o 00:05:37.801 CC lib/util/crc32c.o 00:05:37.801 CC lib/util/crc32_ieee.o 00:05:37.801 CC lib/util/crc64.o 00:05:37.801 CC lib/util/dif.o 00:05:37.801 CC lib/util/fd.o 00:05:37.801 CC lib/util/fd_group.o 00:05:37.801 CC lib/util/file.o 00:05:37.801 CC lib/util/hexlify.o 00:05:37.801 CC lib/util/iov.o 00:05:37.801 CC lib/util/math.o 00:05:37.801 CC lib/util/net.o 00:05:37.801 CC lib/util/pipe.o 00:05:37.801 CC lib/util/strerror_tls.o 00:05:37.801 CC lib/util/string.o 00:05:37.801 CC lib/util/uuid.o 00:05:37.801 CC lib/util/xor.o 00:05:37.801 CC lib/util/md5.o 00:05:37.801 CC lib/util/zipf.o 00:05:37.801 CC lib/vfio_user/host/vfio_user_pci.o 00:05:37.801 CC lib/vfio_user/host/vfio_user.o 00:05:37.801 LIB libspdk_ioat.a 00:05:37.801 LIB libspdk_dma.a 00:05:37.801 SO libspdk_ioat.so.7.0 00:05:37.801 SO libspdk_dma.so.5.0 00:05:37.801 LIB libspdk_vfio_user.a 00:05:37.801 SYMLINK libspdk_ioat.so 00:05:37.801 SYMLINK libspdk_dma.so 00:05:37.801 SO libspdk_vfio_user.so.5.0 00:05:37.801 SYMLINK libspdk_vfio_user.so 00:05:37.801 LIB libspdk_util.a 00:05:37.801 SO libspdk_util.so.10.0 00:05:37.801 SYMLINK libspdk_util.so 00:05:37.801 LIB libspdk_trace_parser.a 00:05:37.801 SO libspdk_trace_parser.so.6.0 00:05:37.801 CC lib/json/json_parse.o 
00:05:37.801 CC lib/vmd/vmd.o 00:05:37.801 CC lib/json/json_util.o 00:05:37.801 CC lib/rdma_utils/rdma_utils.o 00:05:37.801 CC lib/vmd/led.o 00:05:37.801 CC lib/json/json_write.o 00:05:37.801 CC lib/rdma_provider/common.o 00:05:37.801 CC lib/conf/conf.o 00:05:37.801 CC lib/idxd/idxd.o 00:05:37.801 CC lib/env_dpdk/env.o 00:05:37.801 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:37.801 CC lib/idxd/idxd_user.o 00:05:37.801 CC lib/env_dpdk/memory.o 00:05:37.801 CC lib/idxd/idxd_kernel.o 00:05:37.801 CC lib/env_dpdk/pci.o 00:05:37.801 CC lib/env_dpdk/init.o 00:05:37.801 CC lib/env_dpdk/threads.o 00:05:37.801 CC lib/env_dpdk/pci_ioat.o 00:05:37.801 CC lib/env_dpdk/pci_virtio.o 00:05:37.801 CC lib/env_dpdk/pci_vmd.o 00:05:37.801 CC lib/env_dpdk/pci_idxd.o 00:05:37.801 CC lib/env_dpdk/pci_event.o 00:05:37.801 CC lib/env_dpdk/sigbus_handler.o 00:05:37.801 CC lib/env_dpdk/pci_dpdk.o 00:05:37.801 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:37.801 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:37.801 SYMLINK libspdk_trace_parser.so 00:05:37.801 LIB libspdk_rdma_provider.a 00:05:37.801 LIB libspdk_json.a 00:05:37.801 SO libspdk_rdma_provider.so.6.0 00:05:37.801 SO libspdk_json.so.6.0 00:05:37.801 LIB libspdk_conf.a 00:05:37.801 LIB libspdk_rdma_utils.a 00:05:37.801 SO libspdk_conf.so.6.0 00:05:37.801 SO libspdk_rdma_utils.so.1.0 00:05:37.801 SYMLINK libspdk_rdma_provider.so 00:05:37.801 SYMLINK libspdk_json.so 00:05:37.801 SYMLINK libspdk_conf.so 00:05:37.801 SYMLINK libspdk_rdma_utils.so 00:05:37.801 CC lib/jsonrpc/jsonrpc_server.o 00:05:37.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:37.801 CC lib/jsonrpc/jsonrpc_client.o 00:05:37.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:37.801 LIB libspdk_idxd.a 00:05:37.801 SO libspdk_idxd.so.12.1 00:05:37.801 LIB libspdk_vmd.a 00:05:37.801 SYMLINK libspdk_idxd.so 00:05:37.801 SO libspdk_vmd.so.6.0 00:05:37.801 SYMLINK libspdk_vmd.so 00:05:37.801 LIB libspdk_jsonrpc.a 00:05:37.801 SO libspdk_jsonrpc.so.6.0 00:05:37.801 SYMLINK libspdk_jsonrpc.so 
00:05:37.801 CC lib/rpc/rpc.o 00:05:38.059 LIB libspdk_rpc.a 00:05:38.059 SO libspdk_rpc.so.6.0 00:05:38.059 SYMLINK libspdk_rpc.so 00:05:38.317 CC lib/notify/notify.o 00:05:38.317 CC lib/notify/notify_rpc.o 00:05:38.317 CC lib/trace/trace.o 00:05:38.317 CC lib/trace/trace_flags.o 00:05:38.317 CC lib/keyring/keyring.o 00:05:38.317 CC lib/trace/trace_rpc.o 00:05:38.317 CC lib/keyring/keyring_rpc.o 00:05:38.317 LIB libspdk_notify.a 00:05:38.317 SO libspdk_notify.so.6.0 00:05:38.317 SYMLINK libspdk_notify.so 00:05:38.317 LIB libspdk_keyring.a 00:05:38.576 LIB libspdk_trace.a 00:05:38.576 SO libspdk_keyring.so.2.0 00:05:38.576 SO libspdk_trace.so.11.0 00:05:38.576 SYMLINK libspdk_keyring.so 00:05:38.576 SYMLINK libspdk_trace.so 00:05:38.576 LIB libspdk_env_dpdk.a 00:05:38.576 CC lib/thread/thread.o 00:05:38.576 CC lib/thread/iobuf.o 00:05:38.576 CC lib/sock/sock.o 00:05:38.576 CC lib/sock/sock_rpc.o 00:05:38.835 SO libspdk_env_dpdk.so.15.1 00:05:38.835 SYMLINK libspdk_env_dpdk.so 00:05:39.092 LIB libspdk_sock.a 00:05:39.092 SO libspdk_sock.so.10.0 00:05:39.092 SYMLINK libspdk_sock.so 00:05:39.351 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:39.351 CC lib/nvme/nvme_ctrlr.o 00:05:39.351 CC lib/nvme/nvme_fabric.o 00:05:39.351 CC lib/nvme/nvme_ns_cmd.o 00:05:39.351 CC lib/nvme/nvme_ns.o 00:05:39.351 CC lib/nvme/nvme_pcie_common.o 00:05:39.351 CC lib/nvme/nvme_pcie.o 00:05:39.351 CC lib/nvme/nvme_qpair.o 00:05:39.351 CC lib/nvme/nvme.o 00:05:39.351 CC lib/nvme/nvme_quirks.o 00:05:39.351 CC lib/nvme/nvme_transport.o 00:05:39.351 CC lib/nvme/nvme_discovery.o 00:05:39.351 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:39.351 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:39.351 CC lib/nvme/nvme_tcp.o 00:05:39.351 CC lib/nvme/nvme_opal.o 00:05:39.351 CC lib/nvme/nvme_io_msg.o 00:05:39.351 CC lib/nvme/nvme_poll_group.o 00:05:39.351 CC lib/nvme/nvme_zns.o 00:05:39.351 CC lib/nvme/nvme_stubs.o 00:05:39.351 CC lib/nvme/nvme_auth.o 00:05:39.351 CC lib/nvme/nvme_cuse.o 00:05:39.351 CC 
lib/nvme/nvme_vfio_user.o 00:05:39.351 CC lib/nvme/nvme_rdma.o 00:05:40.286 LIB libspdk_thread.a 00:05:40.286 SO libspdk_thread.so.11.0 00:05:40.286 SYMLINK libspdk_thread.so 00:05:40.545 CC lib/blob/blobstore.o 00:05:40.545 CC lib/accel/accel.o 00:05:40.545 CC lib/fsdev/fsdev.o 00:05:40.545 CC lib/virtio/virtio.o 00:05:40.545 CC lib/init/json_config.o 00:05:40.545 CC lib/vfu_tgt/tgt_endpoint.o 00:05:40.545 CC lib/fsdev/fsdev_io.o 00:05:40.545 CC lib/accel/accel_rpc.o 00:05:40.545 CC lib/virtio/virtio_vhost_user.o 00:05:40.545 CC lib/blob/request.o 00:05:40.545 CC lib/accel/accel_sw.o 00:05:40.545 CC lib/vfu_tgt/tgt_rpc.o 00:05:40.545 CC lib/init/subsystem.o 00:05:40.545 CC lib/virtio/virtio_vfio_user.o 00:05:40.545 CC lib/fsdev/fsdev_rpc.o 00:05:40.545 CC lib/blob/zeroes.o 00:05:40.545 CC lib/init/subsystem_rpc.o 00:05:40.545 CC lib/init/rpc.o 00:05:40.545 CC lib/blob/blob_bs_dev.o 00:05:40.545 CC lib/virtio/virtio_pci.o 00:05:40.804 LIB libspdk_init.a 00:05:40.804 SO libspdk_init.so.6.0 00:05:40.804 LIB libspdk_virtio.a 00:05:40.804 SYMLINK libspdk_init.so 00:05:40.804 LIB libspdk_vfu_tgt.a 00:05:41.062 SO libspdk_virtio.so.7.0 00:05:41.062 SO libspdk_vfu_tgt.so.3.0 00:05:41.062 SYMLINK libspdk_virtio.so 00:05:41.062 SYMLINK libspdk_vfu_tgt.so 00:05:41.062 CC lib/event/app.o 00:05:41.062 CC lib/event/reactor.o 00:05:41.062 CC lib/event/log_rpc.o 00:05:41.062 CC lib/event/app_rpc.o 00:05:41.062 CC lib/event/scheduler_static.o 00:05:41.320 LIB libspdk_fsdev.a 00:05:41.320 SO libspdk_fsdev.so.2.0 00:05:41.320 SYMLINK libspdk_fsdev.so 00:05:41.578 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:41.578 LIB libspdk_event.a 00:05:41.578 SO libspdk_event.so.14.0 00:05:41.578 SYMLINK libspdk_event.so 00:05:41.835 LIB libspdk_accel.a 00:05:41.835 SO libspdk_accel.so.16.1 00:05:41.835 LIB libspdk_nvme.a 00:05:41.835 SYMLINK libspdk_accel.so 00:05:41.835 SO libspdk_nvme.so.14.1 00:05:41.835 CC lib/bdev/bdev.o 00:05:41.835 CC lib/bdev/bdev_rpc.o 00:05:41.835 CC 
lib/bdev/bdev_zone.o 00:05:42.092 CC lib/bdev/part.o 00:05:42.092 CC lib/bdev/scsi_nvme.o 00:05:42.092 LIB libspdk_fuse_dispatcher.a 00:05:42.092 SYMLINK libspdk_nvme.so 00:05:42.092 SO libspdk_fuse_dispatcher.so.1.0 00:05:42.092 SYMLINK libspdk_fuse_dispatcher.so 00:05:43.990 LIB libspdk_blob.a 00:05:43.990 SO libspdk_blob.so.11.0 00:05:43.990 SYMLINK libspdk_blob.so 00:05:43.990 CC lib/blobfs/blobfs.o 00:05:43.990 CC lib/blobfs/tree.o 00:05:43.990 CC lib/lvol/lvol.o 00:05:44.932 LIB libspdk_blobfs.a 00:05:44.932 LIB libspdk_bdev.a 00:05:44.932 SO libspdk_blobfs.so.10.0 00:05:44.932 SO libspdk_bdev.so.17.0 00:05:44.932 SYMLINK libspdk_blobfs.so 00:05:44.932 SYMLINK libspdk_bdev.so 00:05:44.932 LIB libspdk_lvol.a 00:05:44.932 SO libspdk_lvol.so.10.0 00:05:44.932 SYMLINK libspdk_lvol.so 00:05:44.932 CC lib/nbd/nbd.o 00:05:44.932 CC lib/nbd/nbd_rpc.o 00:05:44.932 CC lib/nvmf/ctrlr.o 00:05:44.932 CC lib/ublk/ublk.o 00:05:44.932 CC lib/scsi/dev.o 00:05:44.932 CC lib/ftl/ftl_core.o 00:05:44.932 CC lib/scsi/lun.o 00:05:44.932 CC lib/ublk/ublk_rpc.o 00:05:44.932 CC lib/nvmf/ctrlr_discovery.o 00:05:44.932 CC lib/scsi/port.o 00:05:44.932 CC lib/ftl/ftl_init.o 00:05:44.932 CC lib/nvmf/ctrlr_bdev.o 00:05:44.932 CC lib/scsi/scsi.o 00:05:44.932 CC lib/ftl/ftl_layout.o 00:05:44.932 CC lib/nvmf/subsystem.o 00:05:44.932 CC lib/scsi/scsi_bdev.o 00:05:44.932 CC lib/ftl/ftl_debug.o 00:05:44.932 CC lib/nvmf/nvmf.o 00:05:44.932 CC lib/scsi/scsi_pr.o 00:05:44.932 CC lib/ftl/ftl_io.o 00:05:44.932 CC lib/nvmf/nvmf_rpc.o 00:05:44.932 CC lib/scsi/scsi_rpc.o 00:05:44.932 CC lib/nvmf/transport.o 00:05:44.932 CC lib/ftl/ftl_sb.o 00:05:44.932 CC lib/nvmf/tcp.o 00:05:44.932 CC lib/ftl/ftl_l2p.o 00:05:44.932 CC lib/scsi/task.o 00:05:44.932 CC lib/nvmf/stubs.o 00:05:44.932 CC lib/ftl/ftl_l2p_flat.o 00:05:44.932 CC lib/ftl/ftl_nv_cache.o 00:05:44.932 CC lib/nvmf/mdns_server.o 00:05:44.932 CC lib/ftl/ftl_band.o 00:05:44.932 CC lib/nvmf/rdma.o 00:05:44.932 CC lib/nvmf/vfio_user.o 00:05:44.932 CC 
lib/ftl/ftl_band_ops.o 00:05:44.932 CC lib/ftl/ftl_writer.o 00:05:44.932 CC lib/nvmf/auth.o 00:05:44.932 CC lib/ftl/ftl_rq.o 00:05:44.932 CC lib/ftl/ftl_reloc.o 00:05:44.932 CC lib/ftl/ftl_l2p_cache.o 00:05:44.932 CC lib/ftl/ftl_p2l.o 00:05:44.932 CC lib/ftl/ftl_p2l_log.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:44.932 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:45.507 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:45.507 CC lib/ftl/utils/ftl_conf.o 00:05:45.507 CC lib/ftl/utils/ftl_md.o 00:05:45.507 CC lib/ftl/utils/ftl_mempool.o 00:05:45.507 CC lib/ftl/utils/ftl_bitmap.o 00:05:45.507 CC lib/ftl/utils/ftl_property.o 00:05:45.507 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:45.507 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:45.507 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:45.507 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:45.507 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:45.507 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:45.507 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:45.767 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:45.768 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:45.768 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:45.768 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:45.768 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:45.768 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:45.768 CC lib/ftl/base/ftl_base_dev.o 00:05:45.768 CC lib/ftl/base/ftl_base_bdev.o 00:05:45.768 CC lib/ftl/ftl_trace.o 00:05:45.768 LIB libspdk_nbd.a 00:05:46.026 SO libspdk_nbd.so.7.0 00:05:46.026 LIB libspdk_scsi.a 00:05:46.026 SO libspdk_scsi.so.9.0 00:05:46.026 SYMLINK 
libspdk_nbd.so 00:05:46.026 SYMLINK libspdk_scsi.so 00:05:46.026 LIB libspdk_ublk.a 00:05:46.026 SO libspdk_ublk.so.3.0 00:05:46.284 SYMLINK libspdk_ublk.so 00:05:46.284 CC lib/iscsi/conn.o 00:05:46.284 CC lib/vhost/vhost.o 00:05:46.284 CC lib/vhost/vhost_rpc.o 00:05:46.284 CC lib/iscsi/init_grp.o 00:05:46.284 CC lib/iscsi/iscsi.o 00:05:46.284 CC lib/vhost/vhost_scsi.o 00:05:46.284 CC lib/vhost/vhost_blk.o 00:05:46.284 CC lib/iscsi/param.o 00:05:46.284 CC lib/vhost/rte_vhost_user.o 00:05:46.284 CC lib/iscsi/portal_grp.o 00:05:46.284 CC lib/iscsi/tgt_node.o 00:05:46.284 CC lib/iscsi/iscsi_subsystem.o 00:05:46.284 CC lib/iscsi/iscsi_rpc.o 00:05:46.284 CC lib/iscsi/task.o 00:05:46.549 LIB libspdk_ftl.a 00:05:46.549 SO libspdk_ftl.so.9.0 00:05:46.809 SYMLINK libspdk_ftl.so 00:05:47.376 LIB libspdk_vhost.a 00:05:47.634 SO libspdk_vhost.so.8.0 00:05:47.634 SYMLINK libspdk_vhost.so 00:05:47.634 LIB libspdk_iscsi.a 00:05:47.634 LIB libspdk_nvmf.a 00:05:47.634 SO libspdk_iscsi.so.8.0 00:05:47.893 SO libspdk_nvmf.so.20.0 00:05:47.893 SYMLINK libspdk_iscsi.so 00:05:47.893 SYMLINK libspdk_nvmf.so 00:05:48.151 CC module/vfu_device/vfu_virtio.o 00:05:48.151 CC module/vfu_device/vfu_virtio_blk.o 00:05:48.151 CC module/vfu_device/vfu_virtio_scsi.o 00:05:48.151 CC module/env_dpdk/env_dpdk_rpc.o 00:05:48.151 CC module/vfu_device/vfu_virtio_rpc.o 00:05:48.151 CC module/vfu_device/vfu_virtio_fs.o 00:05:48.409 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:48.409 CC module/accel/error/accel_error.o 00:05:48.409 CC module/accel/dsa/accel_dsa.o 00:05:48.409 CC module/keyring/linux/keyring.o 00:05:48.409 CC module/sock/posix/posix.o 00:05:48.409 CC module/scheduler/gscheduler/gscheduler.o 00:05:48.409 CC module/accel/dsa/accel_dsa_rpc.o 00:05:48.409 CC module/accel/error/accel_error_rpc.o 00:05:48.409 CC module/blob/bdev/blob_bdev.o 00:05:48.409 CC module/fsdev/aio/fsdev_aio.o 00:05:48.409 CC module/keyring/file/keyring.o 00:05:48.409 CC module/keyring/linux/keyring_rpc.o 
00:05:48.409 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:48.409 CC module/fsdev/aio/linux_aio_mgr.o 00:05:48.409 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:48.409 CC module/keyring/file/keyring_rpc.o 00:05:48.409 CC module/accel/iaa/accel_iaa.o 00:05:48.409 CC module/accel/iaa/accel_iaa_rpc.o 00:05:48.409 CC module/accel/ioat/accel_ioat.o 00:05:48.409 CC module/accel/ioat/accel_ioat_rpc.o 00:05:48.409 LIB libspdk_env_dpdk_rpc.a 00:05:48.409 SO libspdk_env_dpdk_rpc.so.6.0 00:05:48.409 SYMLINK libspdk_env_dpdk_rpc.so 00:05:48.667 LIB libspdk_keyring_linux.a 00:05:48.667 LIB libspdk_keyring_file.a 00:05:48.667 LIB libspdk_scheduler_dpdk_governor.a 00:05:48.667 LIB libspdk_scheduler_gscheduler.a 00:05:48.667 SO libspdk_keyring_linux.so.1.0 00:05:48.667 SO libspdk_keyring_file.so.2.0 00:05:48.667 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:48.667 SO libspdk_scheduler_gscheduler.so.4.0 00:05:48.667 LIB libspdk_accel_ioat.a 00:05:48.667 LIB libspdk_scheduler_dynamic.a 00:05:48.667 LIB libspdk_accel_iaa.a 00:05:48.667 SO libspdk_accel_ioat.so.6.0 00:05:48.667 SO libspdk_scheduler_dynamic.so.4.0 00:05:48.667 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:48.667 SYMLINK libspdk_keyring_linux.so 00:05:48.667 SYMLINK libspdk_scheduler_gscheduler.so 00:05:48.667 SYMLINK libspdk_keyring_file.so 00:05:48.667 SO libspdk_accel_iaa.so.3.0 00:05:48.667 SYMLINK libspdk_scheduler_dynamic.so 00:05:48.667 SYMLINK libspdk_accel_ioat.so 00:05:48.667 LIB libspdk_accel_error.a 00:05:48.667 SYMLINK libspdk_accel_iaa.so 00:05:48.667 LIB libspdk_blob_bdev.a 00:05:48.667 SO libspdk_accel_error.so.2.0 00:05:48.667 SO libspdk_blob_bdev.so.11.0 00:05:48.667 SYMLINK libspdk_accel_error.so 00:05:48.667 SYMLINK libspdk_blob_bdev.so 00:05:48.667 LIB libspdk_accel_dsa.a 00:05:48.933 SO libspdk_accel_dsa.so.5.0 00:05:48.933 SYMLINK libspdk_accel_dsa.so 00:05:48.933 CC module/blobfs/bdev/blobfs_bdev.o 00:05:48.933 CC module/bdev/error/vbdev_error.o 00:05:48.933 CC 
module/bdev/lvol/vbdev_lvol.o 00:05:48.933 CC module/bdev/raid/bdev_raid.o 00:05:48.933 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:48.933 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:48.933 CC module/bdev/gpt/gpt.o 00:05:48.933 CC module/bdev/error/vbdev_error_rpc.o 00:05:48.933 CC module/bdev/raid/bdev_raid_rpc.o 00:05:48.933 CC module/bdev/gpt/vbdev_gpt.o 00:05:48.933 CC module/bdev/raid/bdev_raid_sb.o 00:05:48.933 CC module/bdev/nvme/bdev_nvme.o 00:05:48.933 CC module/bdev/passthru/vbdev_passthru.o 00:05:48.933 CC module/bdev/raid/raid0.o 00:05:48.933 CC module/bdev/delay/vbdev_delay.o 00:05:48.933 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:48.933 CC module/bdev/malloc/bdev_malloc.o 00:05:48.933 CC module/bdev/raid/raid1.o 00:05:48.933 CC module/bdev/null/bdev_null.o 00:05:48.933 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:48.933 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:48.933 CC module/bdev/null/bdev_null_rpc.o 00:05:48.933 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:48.933 CC module/bdev/nvme/nvme_rpc.o 00:05:48.933 CC module/bdev/raid/concat.o 00:05:48.933 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:48.933 CC module/bdev/nvme/bdev_mdns_client.o 00:05:48.933 CC module/bdev/split/vbdev_split.o 00:05:48.933 CC module/bdev/aio/bdev_aio.o 00:05:48.933 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:48.933 CC module/bdev/nvme/vbdev_opal.o 00:05:48.933 CC module/bdev/aio/bdev_aio_rpc.o 00:05:48.933 CC module/bdev/split/vbdev_split_rpc.o 00:05:48.933 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:48.933 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:48.933 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:48.933 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:48.933 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:48.933 CC module/bdev/ftl/bdev_ftl.o 00:05:48.934 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:48.934 CC module/bdev/iscsi/bdev_iscsi.o 00:05:48.934 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:48.934 LIB libspdk_vfu_device.a 00:05:49.192 SO 
libspdk_vfu_device.so.3.0 00:05:49.193 SYMLINK libspdk_vfu_device.so 00:05:49.193 LIB libspdk_fsdev_aio.a 00:05:49.451 SO libspdk_fsdev_aio.so.1.0 00:05:49.451 LIB libspdk_sock_posix.a 00:05:49.451 SO libspdk_sock_posix.so.6.0 00:05:49.451 SYMLINK libspdk_fsdev_aio.so 00:05:49.451 LIB libspdk_blobfs_bdev.a 00:05:49.451 SO libspdk_blobfs_bdev.so.6.0 00:05:49.451 LIB libspdk_bdev_gpt.a 00:05:49.451 LIB libspdk_bdev_split.a 00:05:49.451 SYMLINK libspdk_sock_posix.so 00:05:49.451 SYMLINK libspdk_blobfs_bdev.so 00:05:49.452 SO libspdk_bdev_gpt.so.6.0 00:05:49.452 SO libspdk_bdev_split.so.6.0 00:05:49.452 LIB libspdk_bdev_null.a 00:05:49.452 LIB libspdk_bdev_ftl.a 00:05:49.452 SYMLINK libspdk_bdev_gpt.so 00:05:49.452 SO libspdk_bdev_null.so.6.0 00:05:49.452 SO libspdk_bdev_ftl.so.6.0 00:05:49.452 LIB libspdk_bdev_error.a 00:05:49.710 SYMLINK libspdk_bdev_split.so 00:05:49.710 LIB libspdk_bdev_iscsi.a 00:05:49.710 LIB libspdk_bdev_aio.a 00:05:49.710 LIB libspdk_bdev_passthru.a 00:05:49.710 SO libspdk_bdev_error.so.6.0 00:05:49.710 SO libspdk_bdev_iscsi.so.6.0 00:05:49.710 SO libspdk_bdev_aio.so.6.0 00:05:49.710 LIB libspdk_bdev_malloc.a 00:05:49.710 SO libspdk_bdev_passthru.so.6.0 00:05:49.710 LIB libspdk_bdev_delay.a 00:05:49.710 SYMLINK libspdk_bdev_null.so 00:05:49.710 LIB libspdk_bdev_zone_block.a 00:05:49.710 SYMLINK libspdk_bdev_ftl.so 00:05:49.710 SO libspdk_bdev_malloc.so.6.0 00:05:49.710 SO libspdk_bdev_delay.so.6.0 00:05:49.710 SO libspdk_bdev_zone_block.so.6.0 00:05:49.710 SYMLINK libspdk_bdev_error.so 00:05:49.710 SYMLINK libspdk_bdev_iscsi.so 00:05:49.710 SYMLINK libspdk_bdev_aio.so 00:05:49.710 SYMLINK libspdk_bdev_passthru.so 00:05:49.710 SYMLINK libspdk_bdev_delay.so 00:05:49.710 SYMLINK libspdk_bdev_malloc.so 00:05:49.710 SYMLINK libspdk_bdev_zone_block.so 00:05:49.710 LIB libspdk_bdev_virtio.a 00:05:49.710 SO libspdk_bdev_virtio.so.6.0 00:05:49.710 LIB libspdk_bdev_lvol.a 00:05:49.970 SO libspdk_bdev_lvol.so.6.0 00:05:49.970 SYMLINK 
libspdk_bdev_virtio.so 00:05:49.970 SYMLINK libspdk_bdev_lvol.so 00:05:50.228 LIB libspdk_bdev_raid.a 00:05:50.228 SO libspdk_bdev_raid.so.6.0 00:05:50.487 SYMLINK libspdk_bdev_raid.so 00:05:51.867 LIB libspdk_bdev_nvme.a 00:05:51.867 SO libspdk_bdev_nvme.so.7.0 00:05:51.867 SYMLINK libspdk_bdev_nvme.so 00:05:52.433 CC module/event/subsystems/iobuf/iobuf.o 00:05:52.433 CC module/event/subsystems/vmd/vmd.o 00:05:52.433 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:52.433 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:52.433 CC module/event/subsystems/sock/sock.o 00:05:52.433 CC module/event/subsystems/keyring/keyring.o 00:05:52.433 CC module/event/subsystems/scheduler/scheduler.o 00:05:52.433 CC module/event/subsystems/fsdev/fsdev.o 00:05:52.433 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:52.433 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:52.433 LIB libspdk_event_keyring.a 00:05:52.433 LIB libspdk_event_vhost_blk.a 00:05:52.433 LIB libspdk_event_vmd.a 00:05:52.433 LIB libspdk_event_vfu_tgt.a 00:05:52.433 LIB libspdk_event_sock.a 00:05:52.433 SO libspdk_event_keyring.so.1.0 00:05:52.433 SO libspdk_event_vhost_blk.so.3.0 00:05:52.433 SO libspdk_event_vfu_tgt.so.3.0 00:05:52.433 SO libspdk_event_vmd.so.6.0 00:05:52.433 SO libspdk_event_sock.so.5.0 00:05:52.433 LIB libspdk_event_fsdev.a 00:05:52.433 LIB libspdk_event_scheduler.a 00:05:52.433 LIB libspdk_event_iobuf.a 00:05:52.433 SO libspdk_event_fsdev.so.1.0 00:05:52.433 SO libspdk_event_scheduler.so.4.0 00:05:52.433 SYMLINK libspdk_event_keyring.so 00:05:52.433 SYMLINK libspdk_event_vhost_blk.so 00:05:52.433 SYMLINK libspdk_event_vfu_tgt.so 00:05:52.433 SYMLINK libspdk_event_sock.so 00:05:52.433 SO libspdk_event_iobuf.so.3.0 00:05:52.433 SYMLINK libspdk_event_vmd.so 00:05:52.692 SYMLINK libspdk_event_fsdev.so 00:05:52.692 SYMLINK libspdk_event_scheduler.so 00:05:52.692 SYMLINK libspdk_event_iobuf.so 00:05:52.692 CC module/event/subsystems/accel/accel.o 00:05:52.951 LIB libspdk_event_accel.a 
00:05:52.951 SO libspdk_event_accel.so.6.0 00:05:52.951 SYMLINK libspdk_event_accel.so 00:05:53.209 CC module/event/subsystems/bdev/bdev.o 00:05:53.466 LIB libspdk_event_bdev.a 00:05:53.466 SO libspdk_event_bdev.so.6.0 00:05:53.466 SYMLINK libspdk_event_bdev.so 00:05:53.466 CC module/event/subsystems/scsi/scsi.o 00:05:53.467 CC module/event/subsystems/nbd/nbd.o 00:05:53.467 CC module/event/subsystems/ublk/ublk.o 00:05:53.467 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:53.467 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:53.725 LIB libspdk_event_nbd.a 00:05:53.725 LIB libspdk_event_ublk.a 00:05:53.725 LIB libspdk_event_scsi.a 00:05:53.725 SO libspdk_event_nbd.so.6.0 00:05:53.725 SO libspdk_event_ublk.so.3.0 00:05:53.725 SO libspdk_event_scsi.so.6.0 00:05:53.725 SYMLINK libspdk_event_ublk.so 00:05:53.725 SYMLINK libspdk_event_nbd.so 00:05:53.725 SYMLINK libspdk_event_scsi.so 00:05:53.725 LIB libspdk_event_nvmf.a 00:05:53.983 SO libspdk_event_nvmf.so.6.0 00:05:53.983 SYMLINK libspdk_event_nvmf.so 00:05:53.983 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:53.983 CC module/event/subsystems/iscsi/iscsi.o 00:05:54.241 LIB libspdk_event_vhost_scsi.a 00:05:54.241 SO libspdk_event_vhost_scsi.so.3.0 00:05:54.241 LIB libspdk_event_iscsi.a 00:05:54.241 SO libspdk_event_iscsi.so.6.0 00:05:54.241 SYMLINK libspdk_event_vhost_scsi.so 00:05:54.241 SYMLINK libspdk_event_iscsi.so 00:05:54.241 SO libspdk.so.6.0 00:05:54.241 SYMLINK libspdk.so 00:05:54.505 CXX app/trace/trace.o 00:05:54.505 CC app/trace_record/trace_record.o 00:05:54.505 CC app/spdk_nvme_discover/discovery_aer.o 00:05:54.505 TEST_HEADER include/spdk/accel.h 00:05:54.505 TEST_HEADER include/spdk/accel_module.h 00:05:54.505 CC app/spdk_nvme_identify/identify.o 00:05:54.505 TEST_HEADER include/spdk/assert.h 00:05:54.505 CC app/spdk_nvme_perf/perf.o 00:05:54.505 TEST_HEADER include/spdk/barrier.h 00:05:54.505 CC app/spdk_top/spdk_top.o 00:05:54.505 CC test/rpc_client/rpc_client_test.o 00:05:54.505 
TEST_HEADER include/spdk/base64.h 00:05:54.505 TEST_HEADER include/spdk/bdev.h 00:05:54.505 TEST_HEADER include/spdk/bdev_module.h 00:05:54.505 TEST_HEADER include/spdk/bdev_zone.h 00:05:54.505 TEST_HEADER include/spdk/bit_array.h 00:05:54.505 TEST_HEADER include/spdk/bit_pool.h 00:05:54.505 TEST_HEADER include/spdk/blob_bdev.h 00:05:54.505 CC app/spdk_lspci/spdk_lspci.o 00:05:54.505 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:54.505 TEST_HEADER include/spdk/blobfs.h 00:05:54.505 TEST_HEADER include/spdk/blob.h 00:05:54.505 TEST_HEADER include/spdk/conf.h 00:05:54.505 TEST_HEADER include/spdk/config.h 00:05:54.505 TEST_HEADER include/spdk/cpuset.h 00:05:54.505 TEST_HEADER include/spdk/crc16.h 00:05:54.505 TEST_HEADER include/spdk/crc32.h 00:05:54.505 TEST_HEADER include/spdk/crc64.h 00:05:54.505 TEST_HEADER include/spdk/dif.h 00:05:54.505 TEST_HEADER include/spdk/dma.h 00:05:54.505 TEST_HEADER include/spdk/endian.h 00:05:54.505 TEST_HEADER include/spdk/env_dpdk.h 00:05:54.505 TEST_HEADER include/spdk/env.h 00:05:54.505 TEST_HEADER include/spdk/fd_group.h 00:05:54.505 TEST_HEADER include/spdk/event.h 00:05:54.505 TEST_HEADER include/spdk/fd.h 00:05:54.505 TEST_HEADER include/spdk/file.h 00:05:54.505 TEST_HEADER include/spdk/fsdev.h 00:05:54.505 TEST_HEADER include/spdk/fsdev_module.h 00:05:54.505 TEST_HEADER include/spdk/ftl.h 00:05:54.505 TEST_HEADER include/spdk/gpt_spec.h 00:05:54.505 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:54.505 TEST_HEADER include/spdk/hexlify.h 00:05:54.505 TEST_HEADER include/spdk/histogram_data.h 00:05:54.505 TEST_HEADER include/spdk/idxd_spec.h 00:05:54.505 TEST_HEADER include/spdk/idxd.h 00:05:54.505 TEST_HEADER include/spdk/init.h 00:05:54.505 TEST_HEADER include/spdk/ioat.h 00:05:54.505 TEST_HEADER include/spdk/ioat_spec.h 00:05:54.505 TEST_HEADER include/spdk/iscsi_spec.h 00:05:54.505 TEST_HEADER include/spdk/json.h 00:05:54.505 TEST_HEADER include/spdk/jsonrpc.h 00:05:54.505 TEST_HEADER include/spdk/keyring_module.h 
00:05:54.505 TEST_HEADER include/spdk/keyring.h 00:05:54.505 TEST_HEADER include/spdk/likely.h 00:05:54.505 TEST_HEADER include/spdk/log.h 00:05:54.505 TEST_HEADER include/spdk/lvol.h 00:05:54.505 TEST_HEADER include/spdk/md5.h 00:05:54.505 TEST_HEADER include/spdk/memory.h 00:05:54.505 TEST_HEADER include/spdk/mmio.h 00:05:54.505 TEST_HEADER include/spdk/nbd.h 00:05:54.505 TEST_HEADER include/spdk/notify.h 00:05:54.505 TEST_HEADER include/spdk/net.h 00:05:54.505 TEST_HEADER include/spdk/nvme.h 00:05:54.505 TEST_HEADER include/spdk/nvme_intel.h 00:05:54.505 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:54.505 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:54.505 TEST_HEADER include/spdk/nvme_spec.h 00:05:54.505 TEST_HEADER include/spdk/nvme_zns.h 00:05:54.505 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:54.505 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:54.505 TEST_HEADER include/spdk/nvmf.h 00:05:54.505 TEST_HEADER include/spdk/nvmf_spec.h 00:05:54.505 TEST_HEADER include/spdk/nvmf_transport.h 00:05:54.505 TEST_HEADER include/spdk/opal.h 00:05:54.505 TEST_HEADER include/spdk/opal_spec.h 00:05:54.505 TEST_HEADER include/spdk/pci_ids.h 00:05:54.505 TEST_HEADER include/spdk/pipe.h 00:05:54.505 TEST_HEADER include/spdk/queue.h 00:05:54.505 TEST_HEADER include/spdk/rpc.h 00:05:54.505 TEST_HEADER include/spdk/reduce.h 00:05:54.505 TEST_HEADER include/spdk/scheduler.h 00:05:54.505 TEST_HEADER include/spdk/scsi.h 00:05:54.505 TEST_HEADER include/spdk/scsi_spec.h 00:05:54.505 TEST_HEADER include/spdk/sock.h 00:05:54.505 TEST_HEADER include/spdk/stdinc.h 00:05:54.505 TEST_HEADER include/spdk/string.h 00:05:54.505 TEST_HEADER include/spdk/thread.h 00:05:54.505 TEST_HEADER include/spdk/trace_parser.h 00:05:54.505 TEST_HEADER include/spdk/trace.h 00:05:54.505 TEST_HEADER include/spdk/ublk.h 00:05:54.505 TEST_HEADER include/spdk/tree.h 00:05:54.505 TEST_HEADER include/spdk/util.h 00:05:54.505 TEST_HEADER include/spdk/uuid.h 00:05:54.505 TEST_HEADER include/spdk/version.h 
00:05:54.505 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:54.505 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:54.505 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:54.505 TEST_HEADER include/spdk/vhost.h 00:05:54.505 TEST_HEADER include/spdk/vmd.h 00:05:54.505 TEST_HEADER include/spdk/xor.h 00:05:54.505 TEST_HEADER include/spdk/zipf.h 00:05:54.505 CXX test/cpp_headers/accel.o 00:05:54.505 CXX test/cpp_headers/accel_module.o 00:05:54.505 CXX test/cpp_headers/assert.o 00:05:54.505 CXX test/cpp_headers/barrier.o 00:05:54.505 CXX test/cpp_headers/base64.o 00:05:54.505 CXX test/cpp_headers/bdev.o 00:05:54.505 CXX test/cpp_headers/bdev_module.o 00:05:54.505 CXX test/cpp_headers/bdev_zone.o 00:05:54.505 CXX test/cpp_headers/bit_array.o 00:05:54.505 CXX test/cpp_headers/bit_pool.o 00:05:54.505 CXX test/cpp_headers/blob_bdev.o 00:05:54.505 CXX test/cpp_headers/blobfs.o 00:05:54.505 CXX test/cpp_headers/blobfs_bdev.o 00:05:54.505 CXX test/cpp_headers/blob.o 00:05:54.505 CXX test/cpp_headers/conf.o 00:05:54.505 CXX test/cpp_headers/config.o 00:05:54.505 CC app/spdk_dd/spdk_dd.o 00:05:54.505 CXX test/cpp_headers/cpuset.o 00:05:54.506 CC app/nvmf_tgt/nvmf_main.o 00:05:54.506 CXX test/cpp_headers/crc16.o 00:05:54.765 CC app/iscsi_tgt/iscsi_tgt.o 00:05:54.765 CXX test/cpp_headers/crc32.o 00:05:54.765 CC app/spdk_tgt/spdk_tgt.o 00:05:54.765 CC examples/ioat/verify/verify.o 00:05:54.765 CC examples/util/zipf/zipf.o 00:05:54.765 CC test/env/memory/memory_ut.o 00:05:54.765 CC examples/ioat/perf/perf.o 00:05:54.765 CC test/thread/poller_perf/poller_perf.o 00:05:54.765 CC test/env/pci/pci_ut.o 00:05:54.765 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:54.765 CC test/app/jsoncat/jsoncat.o 00:05:54.765 CC app/fio/nvme/fio_plugin.o 00:05:54.765 CC test/app/histogram_perf/histogram_perf.o 00:05:54.765 CC test/env/vtophys/vtophys.o 00:05:54.765 CC test/app/stub/stub.o 00:05:54.765 CC test/dma/test_dma/test_dma.o 00:05:54.765 CC app/fio/bdev/fio_plugin.o 00:05:54.765 CC 
test/app/bdev_svc/bdev_svc.o 00:05:54.765 LINK spdk_lspci 00:05:55.026 LINK rpc_client_test 00:05:55.026 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:55.026 CC test/env/mem_callbacks/mem_callbacks.o 00:05:55.026 LINK spdk_nvme_discover 00:05:55.026 LINK poller_perf 00:05:55.026 LINK jsoncat 00:05:55.026 LINK histogram_perf 00:05:55.026 CXX test/cpp_headers/crc64.o 00:05:55.026 CXX test/cpp_headers/dif.o 00:05:55.026 LINK zipf 00:05:55.026 LINK vtophys 00:05:55.026 LINK nvmf_tgt 00:05:55.026 LINK env_dpdk_post_init 00:05:55.026 LINK interrupt_tgt 00:05:55.026 CXX test/cpp_headers/dma.o 00:05:55.026 CXX test/cpp_headers/endian.o 00:05:55.026 CXX test/cpp_headers/env_dpdk.o 00:05:55.026 CXX test/cpp_headers/env.o 00:05:55.026 CXX test/cpp_headers/event.o 00:05:55.026 CXX test/cpp_headers/fd_group.o 00:05:55.026 CXX test/cpp_headers/fd.o 00:05:55.026 CXX test/cpp_headers/file.o 00:05:55.027 CXX test/cpp_headers/fsdev.o 00:05:55.027 CXX test/cpp_headers/fsdev_module.o 00:05:55.027 LINK spdk_trace_record 00:05:55.290 LINK stub 00:05:55.290 CXX test/cpp_headers/ftl.o 00:05:55.290 CXX test/cpp_headers/fuse_dispatcher.o 00:05:55.290 LINK ioat_perf 00:05:55.290 LINK verify 00:05:55.290 CXX test/cpp_headers/gpt_spec.o 00:05:55.290 LINK bdev_svc 00:05:55.290 LINK iscsi_tgt 00:05:55.290 CXX test/cpp_headers/hexlify.o 00:05:55.290 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:55.290 CXX test/cpp_headers/histogram_data.o 00:05:55.290 LINK spdk_tgt 00:05:55.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:55.290 CXX test/cpp_headers/idxd_spec.o 00:05:55.290 CXX test/cpp_headers/idxd.o 00:05:55.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:55.290 CXX test/cpp_headers/init.o 00:05:55.558 CXX test/cpp_headers/ioat.o 00:05:55.558 LINK spdk_dd 00:05:55.558 CXX test/cpp_headers/ioat_spec.o 00:05:55.558 CXX test/cpp_headers/iscsi_spec.o 00:05:55.558 CXX test/cpp_headers/json.o 00:05:55.558 LINK spdk_trace 00:05:55.558 CXX test/cpp_headers/jsonrpc.o 00:05:55.558 CXX 
test/cpp_headers/keyring.o 00:05:55.558 CXX test/cpp_headers/keyring_module.o 00:05:55.558 LINK pci_ut 00:05:55.558 CXX test/cpp_headers/likely.o 00:05:55.558 CXX test/cpp_headers/log.o 00:05:55.558 CXX test/cpp_headers/lvol.o 00:05:55.558 CXX test/cpp_headers/md5.o 00:05:55.558 CXX test/cpp_headers/memory.o 00:05:55.558 CXX test/cpp_headers/mmio.o 00:05:55.558 CXX test/cpp_headers/nbd.o 00:05:55.558 CXX test/cpp_headers/net.o 00:05:55.558 CXX test/cpp_headers/notify.o 00:05:55.558 CXX test/cpp_headers/nvme.o 00:05:55.558 CXX test/cpp_headers/nvme_intel.o 00:05:55.558 CXX test/cpp_headers/nvme_ocssd.o 00:05:55.558 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:55.558 CXX test/cpp_headers/nvme_spec.o 00:05:55.822 CXX test/cpp_headers/nvme_zns.o 00:05:55.822 CXX test/cpp_headers/nvmf_cmd.o 00:05:55.822 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:55.822 CC test/event/event_perf/event_perf.o 00:05:55.822 CC test/event/reactor_perf/reactor_perf.o 00:05:55.822 CC test/event/reactor/reactor.o 00:05:55.822 CXX test/cpp_headers/nvmf.o 00:05:55.822 CC test/event/app_repeat/app_repeat.o 00:05:55.822 CXX test/cpp_headers/nvmf_spec.o 00:05:55.822 LINK spdk_bdev 00:05:55.822 LINK nvme_fuzz 00:05:55.822 CXX test/cpp_headers/nvmf_transport.o 00:05:55.822 CXX test/cpp_headers/opal.o 00:05:55.822 CC examples/sock/hello_world/hello_sock.o 00:05:55.822 LINK spdk_nvme 00:05:55.822 CXX test/cpp_headers/opal_spec.o 00:05:55.822 CC examples/vmd/lsvmd/lsvmd.o 00:05:55.822 CC examples/vmd/led/led.o 00:05:55.822 LINK test_dma 00:05:55.822 CXX test/cpp_headers/pci_ids.o 00:05:55.822 CC examples/idxd/perf/perf.o 00:05:55.822 CC test/event/scheduler/scheduler.o 00:05:55.822 CC examples/thread/thread/thread_ex.o 00:05:56.084 CXX test/cpp_headers/pipe.o 00:05:56.084 CXX test/cpp_headers/queue.o 00:05:56.084 CXX test/cpp_headers/reduce.o 00:05:56.084 CXX test/cpp_headers/rpc.o 00:05:56.084 CXX test/cpp_headers/scheduler.o 00:05:56.084 CXX test/cpp_headers/scsi.o 00:05:56.084 CXX 
test/cpp_headers/scsi_spec.o 00:05:56.084 CXX test/cpp_headers/sock.o 00:05:56.084 CXX test/cpp_headers/stdinc.o 00:05:56.084 CXX test/cpp_headers/string.o 00:05:56.084 CXX test/cpp_headers/thread.o 00:05:56.084 CXX test/cpp_headers/trace.o 00:05:56.084 CXX test/cpp_headers/trace_parser.o 00:05:56.084 CXX test/cpp_headers/tree.o 00:05:56.084 CXX test/cpp_headers/ublk.o 00:05:56.084 LINK event_perf 00:05:56.084 LINK reactor 00:05:56.084 CXX test/cpp_headers/util.o 00:05:56.084 LINK reactor_perf 00:05:56.084 CXX test/cpp_headers/uuid.o 00:05:56.084 CXX test/cpp_headers/version.o 00:05:56.084 CXX test/cpp_headers/vfio_user_pci.o 00:05:56.084 CXX test/cpp_headers/vfio_user_spec.o 00:05:56.084 LINK app_repeat 00:05:56.084 CXX test/cpp_headers/vhost.o 00:05:56.084 CXX test/cpp_headers/vmd.o 00:05:56.084 CC app/vhost/vhost.o 00:05:56.084 CXX test/cpp_headers/xor.o 00:05:56.084 LINK lsvmd 00:05:56.084 LINK spdk_nvme_perf 00:05:56.348 CXX test/cpp_headers/zipf.o 00:05:56.348 LINK vhost_fuzz 00:05:56.348 LINK led 00:05:56.348 LINK mem_callbacks 00:05:56.348 LINK spdk_nvme_identify 00:05:56.348 LINK spdk_top 00:05:56.348 LINK hello_sock 00:05:56.348 LINK scheduler 00:05:56.348 LINK thread 00:05:56.609 CC test/nvme/err_injection/err_injection.o 00:05:56.609 CC test/nvme/simple_copy/simple_copy.o 00:05:56.609 LINK vhost 00:05:56.609 CC test/nvme/aer/aer.o 00:05:56.609 CC test/nvme/e2edp/nvme_dp.o 00:05:56.609 CC test/nvme/reset/reset.o 00:05:56.609 CC test/nvme/overhead/overhead.o 00:05:56.609 CC test/nvme/startup/startup.o 00:05:56.609 CC test/nvme/reserve/reserve.o 00:05:56.609 CC test/nvme/connect_stress/connect_stress.o 00:05:56.609 CC test/nvme/sgl/sgl.o 00:05:56.609 CC test/nvme/boot_partition/boot_partition.o 00:05:56.609 CC test/nvme/fused_ordering/fused_ordering.o 00:05:56.609 CC test/nvme/fdp/fdp.o 00:05:56.609 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:56.609 CC test/nvme/compliance/nvme_compliance.o 00:05:56.609 CC test/nvme/cuse/cuse.o 00:05:56.609 LINK 
idxd_perf 00:05:56.609 CC test/accel/dif/dif.o 00:05:56.609 CC test/blobfs/mkfs/mkfs.o 00:05:56.609 CC test/lvol/esnap/esnap.o 00:05:56.868 LINK err_injection 00:05:56.868 LINK boot_partition 00:05:56.868 LINK doorbell_aers 00:05:56.868 LINK fused_ordering 00:05:56.868 CC examples/nvme/hello_world/hello_world.o 00:05:56.868 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:56.868 CC examples/nvme/hotplug/hotplug.o 00:05:56.868 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:56.868 CC examples/nvme/arbitration/arbitration.o 00:05:56.868 CC examples/nvme/abort/abort.o 00:05:56.868 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:56.868 LINK reserve 00:05:56.868 CC examples/nvme/reconnect/reconnect.o 00:05:56.868 LINK startup 00:05:56.868 LINK reset 00:05:56.868 LINK connect_stress 00:05:56.868 LINK mkfs 00:05:56.868 LINK aer 00:05:56.868 CC examples/accel/perf/accel_perf.o 00:05:56.868 LINK sgl 00:05:56.868 LINK simple_copy 00:05:56.868 LINK memory_ut 00:05:56.868 LINK nvme_compliance 00:05:57.128 CC examples/blob/hello_world/hello_blob.o 00:05:57.128 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:57.128 CC examples/blob/cli/blobcli.o 00:05:57.128 LINK fdp 00:05:57.128 LINK overhead 00:05:57.128 LINK nvme_dp 00:05:57.128 LINK cmb_copy 00:05:57.128 LINK hello_world 00:05:57.128 LINK pmr_persistence 00:05:57.387 LINK arbitration 00:05:57.388 LINK hotplug 00:05:57.388 LINK hello_blob 00:05:57.388 LINK reconnect 00:05:57.388 LINK hello_fsdev 00:05:57.388 LINK abort 00:05:57.388 LINK dif 00:05:57.646 LINK nvme_manage 00:05:57.646 LINK accel_perf 00:05:57.646 LINK blobcli 00:05:57.905 LINK iscsi_fuzz 00:05:57.906 CC test/bdev/bdevio/bdevio.o 00:05:57.906 CC examples/bdev/hello_world/hello_bdev.o 00:05:57.906 CC examples/bdev/bdevperf/bdevperf.o 00:05:58.165 LINK hello_bdev 00:05:58.165 LINK cuse 00:05:58.165 LINK bdevio 00:05:58.732 LINK bdevperf 00:05:58.991 CC examples/nvmf/nvmf/nvmf.o 00:05:59.558 LINK nvmf 00:06:02.095 LINK esnap 00:06:02.354 00:06:02.354 real 
1m10.227s 00:06:02.354 user 11m56.203s 00:06:02.354 sys 2m37.989s 00:06:02.354 08:42:15 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:02.354 08:42:15 make -- common/autotest_common.sh@10 -- $ set +x 00:06:02.354 ************************************ 00:06:02.354 END TEST make 00:06:02.354 ************************************ 00:06:02.354 08:42:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:02.354 08:42:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:02.354 08:42:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:02.354 08:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.354 08:42:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:02.354 08:42:15 -- pm/common@44 -- $ pid=628563 00:06:02.354 08:42:15 -- pm/common@50 -- $ kill -TERM 628563 00:06:02.354 08:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.354 08:42:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:02.354 08:42:15 -- pm/common@44 -- $ pid=628565 00:06:02.354 08:42:15 -- pm/common@50 -- $ kill -TERM 628565 00:06:02.354 08:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.354 08:42:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:02.354 08:42:15 -- pm/common@44 -- $ pid=628567 00:06:02.354 08:42:15 -- pm/common@50 -- $ kill -TERM 628567 00:06:02.354 08:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.354 08:42:15 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:02.354 08:42:15 -- pm/common@44 -- $ pid=628594 00:06:02.354 08:42:15 -- pm/common@50 -- $ sudo -E kill -TERM 628594 00:06:02.354 08:42:15 -- common/autotest_common.sh@1688 -- # [[ y == y ]] 
00:06:02.354 08:42:15 -- common/autotest_common.sh@1689 -- # lcov --version 00:06:02.354 08:42:15 -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:02.613 08:42:15 -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:02.613 08:42:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.613 08:42:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.613 08:42:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.613 08:42:15 -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.613 08:42:15 -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.613 08:42:15 -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.613 08:42:15 -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.613 08:42:15 -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.613 08:42:15 -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.613 08:42:15 -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.613 08:42:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.613 08:42:15 -- scripts/common.sh@344 -- # case "$op" in 00:06:02.613 08:42:15 -- scripts/common.sh@345 -- # : 1 00:06:02.613 08:42:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.613 08:42:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.613 08:42:15 -- scripts/common.sh@365 -- # decimal 1 00:06:02.613 08:42:15 -- scripts/common.sh@353 -- # local d=1 00:06:02.613 08:42:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.613 08:42:15 -- scripts/common.sh@355 -- # echo 1 00:06:02.613 08:42:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.613 08:42:15 -- scripts/common.sh@366 -- # decimal 2 00:06:02.613 08:42:15 -- scripts/common.sh@353 -- # local d=2 00:06:02.613 08:42:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.613 08:42:15 -- scripts/common.sh@355 -- # echo 2 00:06:02.613 08:42:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.613 08:42:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.613 08:42:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.613 08:42:15 -- scripts/common.sh@368 -- # return 0 00:06:02.613 08:42:15 -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.613 08:42:15 -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:02.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.613 --rc genhtml_branch_coverage=1 00:06:02.613 --rc genhtml_function_coverage=1 00:06:02.613 --rc genhtml_legend=1 00:06:02.613 --rc geninfo_all_blocks=1 00:06:02.613 --rc geninfo_unexecuted_blocks=1 00:06:02.613 00:06:02.613 ' 00:06:02.613 08:42:15 -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:02.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.613 --rc genhtml_branch_coverage=1 00:06:02.613 --rc genhtml_function_coverage=1 00:06:02.613 --rc genhtml_legend=1 00:06:02.613 --rc geninfo_all_blocks=1 00:06:02.613 --rc geninfo_unexecuted_blocks=1 00:06:02.613 00:06:02.613 ' 00:06:02.613 08:42:15 -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:02.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.613 --rc genhtml_branch_coverage=1 00:06:02.613 --rc 
genhtml_function_coverage=1 00:06:02.613 --rc genhtml_legend=1 00:06:02.613 --rc geninfo_all_blocks=1 00:06:02.613 --rc geninfo_unexecuted_blocks=1 00:06:02.613 00:06:02.613 ' 00:06:02.613 08:42:15 -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:02.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.613 --rc genhtml_branch_coverage=1 00:06:02.613 --rc genhtml_function_coverage=1 00:06:02.613 --rc genhtml_legend=1 00:06:02.613 --rc geninfo_all_blocks=1 00:06:02.613 --rc geninfo_unexecuted_blocks=1 00:06:02.613 00:06:02.613 ' 00:06:02.613 08:42:15 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:02.613 08:42:15 -- nvmf/common.sh@7 -- # uname -s 00:06:02.614 08:42:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.614 08:42:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.614 08:42:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.614 08:42:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.614 08:42:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.614 08:42:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.614 08:42:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.614 08:42:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.614 08:42:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.614 08:42:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.614 08:42:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:02.614 08:42:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:02.614 08:42:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.614 08:42:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.614 08:42:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:02.614 08:42:15 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.614 08:42:15 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:02.614 08:42:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.614 08:42:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.614 08:42:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.614 08:42:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.614 08:42:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.614 08:42:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.614 08:42:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.614 08:42:15 -- paths/export.sh@5 -- # export PATH 00:06:02.614 08:42:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.614 08:42:15 -- nvmf/common.sh@51 -- # : 0 00:06:02.614 08:42:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.614 08:42:15 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:02.614 08:42:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.614 08:42:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.614 08:42:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.614 08:42:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.614 08:42:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.614 08:42:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.614 08:42:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.614 08:42:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:02.614 08:42:15 -- spdk/autotest.sh@32 -- # uname -s 00:06:02.614 08:42:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:02.614 08:42:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:02.614 08:42:15 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:02.614 08:42:15 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:02.614 08:42:15 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:02.614 08:42:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:02.614 08:42:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:02.614 08:42:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:02.614 08:42:15 -- spdk/autotest.sh@48 -- # udevadm_pid=688602 00:06:02.614 08:42:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:02.614 08:42:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:02.614 08:42:15 -- pm/common@17 -- # local monitor 00:06:02.614 08:42:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.614 08:42:15 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:02.614 08:42:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.614 08:42:15 -- pm/common@21 -- # date +%s 00:06:02.614 08:42:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:02.614 08:42:15 -- pm/common@21 -- # date +%s 00:06:02.614 08:42:15 -- pm/common@25 -- # sleep 1 00:06:02.614 08:42:15 -- pm/common@21 -- # date +%s 00:06:02.614 08:42:15 -- pm/common@21 -- # date +%s 00:06:02.614 08:42:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878935 00:06:02.614 08:42:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878935 00:06:02.614 08:42:15 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878935 00:06:02.614 08:42:15 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730878935 00:06:02.614 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878935_collect-cpu-load.pm.log 00:06:02.614 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878935_collect-vmstat.pm.log 00:06:02.614 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878935_collect-cpu-temp.pm.log 00:06:02.614 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730878935_collect-bmc-pm.bmc.pm.log 00:06:03.552 
08:42:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:03.552 08:42:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:03.552 08:42:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.552 08:42:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 08:42:16 -- spdk/autotest.sh@59 -- # create_test_list 00:06:03.552 08:42:16 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:03.552 08:42:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.552 08:42:16 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:03.552 08:42:16 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.552 08:42:16 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.552 08:42:16 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:03.552 08:42:16 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:03.552 08:42:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:03.552 08:42:16 -- common/autotest_common.sh@1453 -- # uname 00:06:03.552 08:42:16 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:06:03.552 08:42:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:03.552 08:42:16 -- common/autotest_common.sh@1473 -- # uname 00:06:03.552 08:42:16 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:06:03.552 08:42:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:03.552 08:42:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:03.552 lcov: LCOV version 1.15 00:06:03.552 08:42:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:21.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:21.652 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:43.596 08:42:53 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:43.596 08:42:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.596 08:42:53 -- common/autotest_common.sh@10 -- # set +x 00:06:43.596 08:42:53 -- spdk/autotest.sh@78 -- # rm -f 00:06:43.596 08:42:53 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:43.596 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:43.596 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:43.596 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:43.596 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:43.596 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:43.596 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:43.596 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:43.596 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:43.596 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:06:43.596 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:43.596 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:43.596 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:43.596 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:43.596 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:43.596 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:43.596 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:43.596 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:43.596 08:42:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:43.596 08:42:54 -- common/autotest_common.sh@1653 -- # zoned_devs=() 00:06:43.596 08:42:54 -- common/autotest_common.sh@1653 -- # local -gA zoned_devs 00:06:43.596 08:42:54 -- common/autotest_common.sh@1654 -- # local nvme bdf 00:06:43.596 08:42:54 -- common/autotest_common.sh@1656 -- # for nvme in /sys/block/nvme* 00:06:43.596 08:42:54 -- common/autotest_common.sh@1657 -- # is_block_zoned nvme0n1 00:06:43.596 08:42:54 -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:06:43.596 08:42:54 -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:43.596 08:42:54 -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:06:43.596 08:42:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:43.596 08:42:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:43.596 08:42:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:43.596 08:42:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:43.596 08:42:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:43.597 08:42:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:43.597 No valid GPT data, bailing 00:06:43.597 08:42:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:43.597 08:42:55 -- scripts/common.sh@394 -- # pt= 00:06:43.597 08:42:55 -- scripts/common.sh@395 -- # return 1 00:06:43.597 08:42:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:43.597 1+0 records in 00:06:43.597 1+0 records out 00:06:43.597 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00179112 s, 585 MB/s 00:06:43.597 08:42:55 -- spdk/autotest.sh@105 -- # sync 00:06:43.597 08:42:55 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:43.597 08:42:55 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:43.597 08:42:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:44.166 08:42:57 -- spdk/autotest.sh@111 -- # uname -s 00:06:44.166 08:42:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:44.166 08:42:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:44.166 08:42:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:45.611 Hugepages 00:06:45.611 node hugesize free / total 00:06:45.611 node0 1048576kB 0 / 0 00:06:45.611 node0 2048kB 0 / 0 00:06:45.611 node1 1048576kB 0 / 0 00:06:45.611 node1 2048kB 0 / 0 00:06:45.611 00:06:45.611 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:45.611 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:45.611 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:45.611 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:45.611 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:45.611 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:45.611 08:42:58 -- spdk/autotest.sh@117 -- # uname -s 00:06:45.611 08:42:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:45.611 08:42:58 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:45.611 08:42:58 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:46.612 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:46.612 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:46.612 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:46.612 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:46.612 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:46.872 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:46.872 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:46.872 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:46.872 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:47.812 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:47.812 08:43:01 -- common/autotest_common.sh@1513 -- # sleep 1 00:06:49.194 08:43:02 -- common/autotest_common.sh@1514 -- # bdfs=() 00:06:49.194 08:43:02 -- common/autotest_common.sh@1514 -- # local bdfs 00:06:49.194 08:43:02 -- common/autotest_common.sh@1516 -- # bdfs=($(get_nvme_bdfs)) 00:06:49.194 08:43:02 -- common/autotest_common.sh@1516 -- # get_nvme_bdfs 00:06:49.194 08:43:02 -- common/autotest_common.sh@1494 -- # bdfs=() 00:06:49.194 08:43:02 -- common/autotest_common.sh@1494 -- # local bdfs 00:06:49.194 08:43:02 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:49.194 08:43:02 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:49.194 08:43:02 -- common/autotest_common.sh@1495 -- # jq -r 
'.config[].params.traddr' 00:06:49.194 08:43:02 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:06:49.194 08:43:02 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:0b:00.0 00:06:49.194 08:43:02 -- common/autotest_common.sh@1518 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:50.130 Waiting for block devices as requested 00:06:50.130 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:50.390 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:50.390 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:50.390 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:50.390 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:50.650 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:50.650 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:50.650 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:50.909 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:06:50.909 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:50.909 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:51.168 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:51.168 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:51.168 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:51.168 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:51.426 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:51.426 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:51.426 08:43:04 -- common/autotest_common.sh@1520 -- # for bdf in "${bdfs[@]}" 00:06:51.426 08:43:04 -- common/autotest_common.sh@1521 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:06:51.426 08:43:04 -- common/autotest_common.sh@1483 -- # readlink -f /sys/class/nvme/nvme0 00:06:51.426 08:43:04 -- common/autotest_common.sh@1483 -- # grep 0000:0b:00.0/nvme/nvme 00:06:51.685 08:43:04 -- common/autotest_common.sh@1483 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1484 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:06:51.685 08:43:04 -- common/autotest_common.sh@1488 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1488 -- # printf '%s\n' nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1521 -- # nvme_ctrlr=/dev/nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1522 -- # [[ -z /dev/nvme0 ]] 00:06:51.685 08:43:04 -- common/autotest_common.sh@1527 -- # nvme id-ctrl /dev/nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1527 -- # grep oacs 00:06:51.685 08:43:04 -- common/autotest_common.sh@1527 -- # cut -d: -f2 00:06:51.685 08:43:04 -- common/autotest_common.sh@1527 -- # oacs=' 0xf' 00:06:51.685 08:43:04 -- common/autotest_common.sh@1528 -- # oacs_ns_manage=8 00:06:51.685 08:43:04 -- common/autotest_common.sh@1530 -- # [[ 8 -ne 0 ]] 00:06:51.685 08:43:04 -- common/autotest_common.sh@1536 -- # nvme id-ctrl /dev/nvme0 00:06:51.685 08:43:04 -- common/autotest_common.sh@1536 -- # grep unvmcap 00:06:51.685 08:43:04 -- common/autotest_common.sh@1536 -- # cut -d: -f2 00:06:51.685 08:43:04 -- common/autotest_common.sh@1536 -- # unvmcap=' 0' 00:06:51.685 08:43:04 -- common/autotest_common.sh@1537 -- # [[ 0 -eq 0 ]] 00:06:51.685 08:43:04 -- common/autotest_common.sh@1539 -- # continue 00:06:51.685 08:43:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:51.685 08:43:04 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.685 08:43:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.685 08:43:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:51.685 08:43:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:51.685 08:43:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.685 08:43:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:53.062 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:06:53.062 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:53.062 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:53.062 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:54.006 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:54.006 08:43:07 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:54.006 08:43:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:54.006 08:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.006 08:43:07 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:54.006 08:43:07 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:06:54.006 08:43:07 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:06:54.006 08:43:07 -- common/autotest_common.sh@1559 -- # bdfs=() 00:06:54.006 08:43:07 -- common/autotest_common.sh@1559 -- # _bdfs=() 00:06:54.006 08:43:07 -- common/autotest_common.sh@1559 -- # local bdfs _bdfs 00:06:54.006 08:43:07 -- common/autotest_common.sh@1560 -- # _bdfs=($(get_nvme_bdfs)) 00:06:54.006 08:43:07 -- common/autotest_common.sh@1560 -- # get_nvme_bdfs 00:06:54.006 08:43:07 -- common/autotest_common.sh@1494 -- # bdfs=() 00:06:54.006 08:43:07 -- common/autotest_common.sh@1494 -- # local bdfs 00:06:54.006 08:43:07 -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:54.006 08:43:07 -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:54.006 08:43:07 -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:06:54.265 08:43:07 -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:06:54.265 08:43:07 -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:0b:00.0 00:06:54.265 08:43:07 -- common/autotest_common.sh@1561 -- # for bdf in "${_bdfs[@]}" 00:06:54.265 08:43:07 -- common/autotest_common.sh@1562 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:06:54.265 08:43:07 -- common/autotest_common.sh@1562 -- # device=0x0a54 00:06:54.265 08:43:07 -- common/autotest_common.sh@1563 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:54.265 08:43:07 -- common/autotest_common.sh@1564 -- # bdfs+=($bdf) 00:06:54.265 08:43:07 -- common/autotest_common.sh@1568 -- # (( 1 > 0 )) 00:06:54.265 08:43:07 -- common/autotest_common.sh@1569 -- # printf '%s\n' 0000:0b:00.0 00:06:54.265 08:43:07 -- common/autotest_common.sh@1575 -- # [[ -z 0000:0b:00.0 ]] 00:06:54.265 08:43:07 -- common/autotest_common.sh@1580 -- # spdk_tgt_pid=699014 00:06:54.265 08:43:07 -- common/autotest_common.sh@1579 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.265 08:43:07 -- common/autotest_common.sh@1581 -- # waitforlisten 699014 00:06:54.265 08:43:07 -- common/autotest_common.sh@831 -- # '[' -z 699014 ']' 00:06:54.265 08:43:07 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.265 08:43:07 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.265 08:43:07 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:54.265 08:43:07 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.265 08:43:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.265 [2024-11-06 08:43:07.385930] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:06:54.265 [2024-11-06 08:43:07.386011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid699014 ] 00:06:54.265 [2024-11-06 08:43:07.452195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.265 [2024-11-06 08:43:07.504717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.523 08:43:07 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.523 08:43:07 -- common/autotest_common.sh@864 -- # return 0 00:06:54.523 08:43:07 -- common/autotest_common.sh@1583 -- # bdf_id=0 00:06:54.523 08:43:07 -- common/autotest_common.sh@1584 -- # for bdf in "${bdfs[@]}" 00:06:54.523 08:43:07 -- common/autotest_common.sh@1585 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:06:57.809 nvme0n1 00:06:57.809 08:43:10 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:58.068 [2024-11-06 08:43:11.119432] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:58.068 [2024-11-06 08:43:11.119473] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:58.068 request: 00:06:58.068 { 00:06:58.068 "nvme_ctrlr_name": "nvme0", 00:06:58.068 "password": "test", 00:06:58.068 "method": "bdev_nvme_opal_revert", 00:06:58.068 "req_id": 1 00:06:58.068 } 00:06:58.068 Got JSON-RPC error response 00:06:58.068 response: 00:06:58.068 { 00:06:58.068 
"code": -32603, 00:06:58.068 "message": "Internal error" 00:06:58.068 } 00:06:58.068 08:43:11 -- common/autotest_common.sh@1587 -- # true 00:06:58.068 08:43:11 -- common/autotest_common.sh@1588 -- # (( ++bdf_id )) 00:06:58.068 08:43:11 -- common/autotest_common.sh@1591 -- # killprocess 699014 00:06:58.068 08:43:11 -- common/autotest_common.sh@950 -- # '[' -z 699014 ']' 00:06:58.068 08:43:11 -- common/autotest_common.sh@954 -- # kill -0 699014 00:06:58.068 08:43:11 -- common/autotest_common.sh@955 -- # uname 00:06:58.068 08:43:11 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.068 08:43:11 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 699014 00:06:58.068 08:43:11 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.068 08:43:11 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.068 08:43:11 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 699014' 00:06:58.068 killing process with pid 699014 00:06:58.068 08:43:11 -- common/autotest_common.sh@969 -- # kill 699014 00:06:58.068 08:43:11 -- common/autotest_common.sh@974 -- # wait 699014 00:06:59.968 08:43:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:59.968 08:43:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:59.968 08:43:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:59.968 08:43:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:59.968 08:43:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:59.968 08:43:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:59.968 08:43:12 -- common/autotest_common.sh@10 -- # set +x 00:06:59.968 08:43:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:59.968 08:43:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.968 08:43:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.968 08:43:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.968 08:43:12 -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.968 ************************************ 00:06:59.968 START TEST env 00:06:59.968 ************************************ 00:06:59.968 08:43:12 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:59.968 * Looking for test storage... 00:06:59.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:59.968 08:43:12 env -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:06:59.968 08:43:12 env -- common/autotest_common.sh@1689 -- # lcov --version 00:06:59.968 08:43:12 env -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:06:59.968 08:43:13 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.968 08:43:13 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.968 08:43:13 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.968 08:43:13 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.968 08:43:13 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.968 08:43:13 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.968 08:43:13 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.968 08:43:13 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.968 08:43:13 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.968 08:43:13 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.968 08:43:13 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.968 08:43:13 env -- scripts/common.sh@344 -- # case "$op" in 00:06:59.968 08:43:13 env -- scripts/common.sh@345 -- # : 1 00:06:59.968 08:43:13 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.968 08:43:13 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.968 08:43:13 env -- scripts/common.sh@365 -- # decimal 1 00:06:59.968 08:43:13 env -- scripts/common.sh@353 -- # local d=1 00:06:59.968 08:43:13 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.968 08:43:13 env -- scripts/common.sh@355 -- # echo 1 00:06:59.968 08:43:13 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.968 08:43:13 env -- scripts/common.sh@366 -- # decimal 2 00:06:59.968 08:43:13 env -- scripts/common.sh@353 -- # local d=2 00:06:59.968 08:43:13 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.968 08:43:13 env -- scripts/common.sh@355 -- # echo 2 00:06:59.968 08:43:13 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.968 08:43:13 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.968 08:43:13 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.968 08:43:13 env -- scripts/common.sh@368 -- # return 0 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:06:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.968 --rc genhtml_branch_coverage=1 00:06:59.968 --rc genhtml_function_coverage=1 00:06:59.968 --rc genhtml_legend=1 00:06:59.968 --rc geninfo_all_blocks=1 00:06:59.968 --rc geninfo_unexecuted_blocks=1 00:06:59.968 00:06:59.968 ' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:06:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.968 --rc genhtml_branch_coverage=1 00:06:59.968 --rc genhtml_function_coverage=1 00:06:59.968 --rc genhtml_legend=1 00:06:59.968 --rc geninfo_all_blocks=1 00:06:59.968 --rc geninfo_unexecuted_blocks=1 00:06:59.968 00:06:59.968 ' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:06:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:59.968 --rc genhtml_branch_coverage=1 00:06:59.968 --rc genhtml_function_coverage=1 00:06:59.968 --rc genhtml_legend=1 00:06:59.968 --rc geninfo_all_blocks=1 00:06:59.968 --rc geninfo_unexecuted_blocks=1 00:06:59.968 00:06:59.968 ' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:06:59.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.968 --rc genhtml_branch_coverage=1 00:06:59.968 --rc genhtml_function_coverage=1 00:06:59.968 --rc genhtml_legend=1 00:06:59.968 --rc geninfo_all_blocks=1 00:06:59.968 --rc geninfo_unexecuted_blocks=1 00:06:59.968 00:06:59.968 ' 00:06:59.968 08:43:13 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.968 08:43:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.968 08:43:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:59.968 ************************************ 00:06:59.968 START TEST env_memory 00:06:59.968 ************************************ 00:06:59.968 08:43:13 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:59.968 00:06:59.968 00:06:59.968 CUnit - A unit testing framework for C - Version 2.1-3 00:06:59.968 http://cunit.sourceforge.net/ 00:06:59.968 00:06:59.968 00:06:59.968 Suite: memory 00:06:59.968 Test: alloc and free memory map ...[2024-11-06 08:43:13.116911] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:59.968 passed 00:06:59.968 Test: mem map translation ...[2024-11-06 08:43:13.137530] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:59.968 [2024-11-06 
08:43:13.137551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:59.968 [2024-11-06 08:43:13.137608] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:59.968 [2024-11-06 08:43:13.137621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:59.968 passed 00:06:59.968 Test: mem map registration ...[2024-11-06 08:43:13.180355] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:59.968 [2024-11-06 08:43:13.180374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:59.968 passed 00:06:59.968 Test: mem map adjacent registrations ...passed 00:06:59.968 00:06:59.968 Run Summary: Type Total Ran Passed Failed Inactive 00:06:59.968 suites 1 1 n/a 0 0 00:06:59.968 tests 4 4 4 0 0 00:06:59.968 asserts 152 152 152 0 n/a 00:06:59.968 00:06:59.968 Elapsed time = 0.145 seconds 00:06:59.968 00:06:59.968 real 0m0.154s 00:06:59.968 user 0m0.145s 00:06:59.968 sys 0m0.008s 00:06:59.968 08:43:13 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.968 08:43:13 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:59.968 ************************************ 00:06:59.968 END TEST env_memory 00:06:59.968 ************************************ 00:07:00.227 08:43:13 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:00.227 08:43:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:07:00.227 08:43:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.227 08:43:13 env -- common/autotest_common.sh@10 -- # set +x 00:07:00.227 ************************************ 00:07:00.227 START TEST env_vtophys 00:07:00.227 ************************************ 00:07:00.227 08:43:13 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:00.227 EAL: lib.eal log level changed from notice to debug 00:07:00.227 EAL: Detected lcore 0 as core 0 on socket 0 00:07:00.227 EAL: Detected lcore 1 as core 1 on socket 0 00:07:00.227 EAL: Detected lcore 2 as core 2 on socket 0 00:07:00.227 EAL: Detected lcore 3 as core 3 on socket 0 00:07:00.227 EAL: Detected lcore 4 as core 4 on socket 0 00:07:00.227 EAL: Detected lcore 5 as core 5 on socket 0 00:07:00.227 EAL: Detected lcore 6 as core 8 on socket 0 00:07:00.227 EAL: Detected lcore 7 as core 9 on socket 0 00:07:00.227 EAL: Detected lcore 8 as core 10 on socket 0 00:07:00.227 EAL: Detected lcore 9 as core 11 on socket 0 00:07:00.227 EAL: Detected lcore 10 as core 12 on socket 0 00:07:00.227 EAL: Detected lcore 11 as core 13 on socket 0 00:07:00.227 EAL: Detected lcore 12 as core 0 on socket 1 00:07:00.227 EAL: Detected lcore 13 as core 1 on socket 1 00:07:00.227 EAL: Detected lcore 14 as core 2 on socket 1 00:07:00.227 EAL: Detected lcore 15 as core 3 on socket 1 00:07:00.227 EAL: Detected lcore 16 as core 4 on socket 1 00:07:00.227 EAL: Detected lcore 17 as core 5 on socket 1 00:07:00.227 EAL: Detected lcore 18 as core 8 on socket 1 00:07:00.227 EAL: Detected lcore 19 as core 9 on socket 1 00:07:00.227 EAL: Detected lcore 20 as core 10 on socket 1 00:07:00.227 EAL: Detected lcore 21 as core 11 on socket 1 00:07:00.227 EAL: Detected lcore 22 as core 12 on socket 1 00:07:00.227 EAL: Detected lcore 23 as core 13 on socket 1 00:07:00.227 EAL: Detected lcore 24 as core 0 on socket 0 00:07:00.227 EAL: Detected lcore 25 as core 
1 on socket 0 00:07:00.227 EAL: Detected lcore 26 as core 2 on socket 0 00:07:00.227 EAL: Detected lcore 27 as core 3 on socket 0 00:07:00.227 EAL: Detected lcore 28 as core 4 on socket 0 00:07:00.227 EAL: Detected lcore 29 as core 5 on socket 0 00:07:00.227 EAL: Detected lcore 30 as core 8 on socket 0 00:07:00.227 EAL: Detected lcore 31 as core 9 on socket 0 00:07:00.227 EAL: Detected lcore 32 as core 10 on socket 0 00:07:00.227 EAL: Detected lcore 33 as core 11 on socket 0 00:07:00.227 EAL: Detected lcore 34 as core 12 on socket 0 00:07:00.227 EAL: Detected lcore 35 as core 13 on socket 0 00:07:00.227 EAL: Detected lcore 36 as core 0 on socket 1 00:07:00.227 EAL: Detected lcore 37 as core 1 on socket 1 00:07:00.227 EAL: Detected lcore 38 as core 2 on socket 1 00:07:00.227 EAL: Detected lcore 39 as core 3 on socket 1 00:07:00.227 EAL: Detected lcore 40 as core 4 on socket 1 00:07:00.227 EAL: Detected lcore 41 as core 5 on socket 1 00:07:00.227 EAL: Detected lcore 42 as core 8 on socket 1 00:07:00.227 EAL: Detected lcore 43 as core 9 on socket 1 00:07:00.227 EAL: Detected lcore 44 as core 10 on socket 1 00:07:00.227 EAL: Detected lcore 45 as core 11 on socket 1 00:07:00.227 EAL: Detected lcore 46 as core 12 on socket 1 00:07:00.227 EAL: Detected lcore 47 as core 13 on socket 1 00:07:00.227 EAL: Maximum logical cores by configuration: 128 00:07:00.227 EAL: Detected CPU lcores: 48 00:07:00.227 EAL: Detected NUMA nodes: 2 00:07:00.227 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:00.227 EAL: Detected shared linkage of DPDK 00:07:00.228 EAL: No shared files mode enabled, IPC will be disabled 00:07:00.228 EAL: Bus pci wants IOVA as 'DC' 00:07:00.228 EAL: Buses did not request a specific IOVA mode. 00:07:00.228 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:00.228 EAL: Selected IOVA mode 'VA' 00:07:00.228 EAL: Probing VFIO support... 
00:07:00.228 EAL: IOMMU type 1 (Type 1) is supported 00:07:00.228 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:00.228 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:00.228 EAL: VFIO support initialized 00:07:00.228 EAL: Ask a virtual area of 0x2e000 bytes 00:07:00.228 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:00.228 EAL: Setting up physically contiguous memory... 00:07:00.228 EAL: Setting maximum number of open files to 524288 00:07:00.228 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:00.228 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:00.228 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:00.228 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:00.228 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:00.228 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.228 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:00.228 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:00.228 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.228 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:07:00.228 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:00.228 EAL: Hugepages will be freed exactly as allocated. 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: TSC frequency is ~2700000 KHz 00:07:00.228 EAL: Main lcore 0 is ready (tid=7f63a850ba00;cpuset=[0]) 00:07:00.228 EAL: Trying to obtain current memory policy. 00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.228 EAL: Restoring previous memory policy: 0 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was expanded by 2MB 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:00.228 EAL: Mem event callback 'spdk:(nil)' registered 00:07:00.228 00:07:00.228 00:07:00.228 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.228 http://cunit.sourceforge.net/ 00:07:00.228 00:07:00.228 00:07:00.228 Suite: components_suite 00:07:00.228 Test: vtophys_malloc_test ...passed 00:07:00.228 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.228 EAL: Restoring previous memory policy: 4 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was expanded by 4MB 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was shrunk by 4MB 00:07:00.228 EAL: Trying to obtain current memory policy. 
00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.228 EAL: Restoring previous memory policy: 4 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was expanded by 6MB 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was shrunk by 6MB 00:07:00.228 EAL: Trying to obtain current memory policy. 00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.228 EAL: Restoring previous memory policy: 4 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was expanded by 10MB 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was shrunk by 10MB 00:07:00.228 EAL: Trying to obtain current memory policy. 00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.228 EAL: Restoring previous memory policy: 4 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was expanded by 18MB 00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)' 00:07:00.228 EAL: request: mp_malloc_sync 00:07:00.228 EAL: No shared files mode enabled, IPC is disabled 00:07:00.228 EAL: Heap on socket 0 was shrunk by 18MB 00:07:00.228 EAL: Trying to obtain current memory policy. 
00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:00.228 EAL: Restoring previous memory policy: 4
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was expanded by 34MB
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was shrunk by 34MB
00:07:00.228 EAL: Trying to obtain current memory policy.
00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:00.228 EAL: Restoring previous memory policy: 4
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was expanded by 66MB
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was shrunk by 66MB
00:07:00.228 EAL: Trying to obtain current memory policy.
00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:00.228 EAL: Restoring previous memory policy: 4
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was expanded by 130MB
00:07:00.228 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.228 EAL: request: mp_malloc_sync
00:07:00.228 EAL: No shared files mode enabled, IPC is disabled
00:07:00.228 EAL: Heap on socket 0 was shrunk by 130MB
00:07:00.228 EAL: Trying to obtain current memory policy.
00:07:00.228 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:00.487 EAL: Restoring previous memory policy: 4
00:07:00.487 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.487 EAL: request: mp_malloc_sync
00:07:00.487 EAL: No shared files mode enabled, IPC is disabled
00:07:00.487 EAL: Heap on socket 0 was expanded by 258MB
00:07:00.487 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.487 EAL: request: mp_malloc_sync
00:07:00.487 EAL: No shared files mode enabled, IPC is disabled
00:07:00.487 EAL: Heap on socket 0 was shrunk by 258MB
00:07:00.487 EAL: Trying to obtain current memory policy.
00:07:00.487 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:00.746 EAL: Restoring previous memory policy: 4
00:07:00.746 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.746 EAL: request: mp_malloc_sync
00:07:00.746 EAL: No shared files mode enabled, IPC is disabled
00:07:00.746 EAL: Heap on socket 0 was expanded by 514MB
00:07:00.746 EAL: Calling mem event callback 'spdk:(nil)'
00:07:00.746 EAL: request: mp_malloc_sync
00:07:00.746 EAL: No shared files mode enabled, IPC is disabled
00:07:00.746 EAL: Heap on socket 0 was shrunk by 514MB
00:07:00.746 EAL: Trying to obtain current memory policy.
00:07:00.746 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:01.004 EAL: Restoring previous memory policy: 4
00:07:01.004 EAL: Calling mem event callback 'spdk:(nil)'
00:07:01.004 EAL: request: mp_malloc_sync
00:07:01.004 EAL: No shared files mode enabled, IPC is disabled
00:07:01.004 EAL: Heap on socket 0 was expanded by 1026MB
00:07:01.261 EAL: Calling mem event callback 'spdk:(nil)'
00:07:01.519 EAL: request: mp_malloc_sync
00:07:01.519 EAL: No shared files mode enabled, IPC is disabled
00:07:01.519 EAL: Heap on socket 0 was shrunk by 1026MB
00:07:01.519 passed
00:07:01.519
00:07:01.519 Run Summary: Type Total Ran Passed Failed Inactive
00:07:01.519 suites 1 1 n/a 0 0
00:07:01.519 tests 2 2 2 0 0
00:07:01.519 asserts 497 497 497 0 n/a
00:07:01.519
00:07:01.519 Elapsed time = 1.329 seconds
00:07:01.519 EAL: Calling mem event callback 'spdk:(nil)'
00:07:01.519 EAL: request: mp_malloc_sync
00:07:01.519 EAL: No shared files mode enabled, IPC is disabled
00:07:01.520 EAL: Heap on socket 0 was shrunk by 2MB
00:07:01.520 EAL: No shared files mode enabled, IPC is disabled
00:07:01.520 EAL: No shared files mode enabled, IPC is disabled
00:07:01.520 EAL: No shared files mode enabled, IPC is disabled
00:07:01.520
00:07:01.520 real 0m1.444s
00:07:01.520 user 0m0.853s
00:07:01.520 sys 0m0.560s
00:07:01.520 08:43:14 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.520 08:43:14 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:07:01.520 ************************************
00:07:01.520 END TEST env_vtophys
00:07:01.520 ************************************
00:07:01.520 08:43:14 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:07:01.520 08:43:14 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:01.520 08:43:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.520 08:43:14 env -- common/autotest_common.sh@10 -- # set +x
00:07:01.520 ************************************
00:07:01.520 START TEST env_pci
00:07:01.520 ************************************
00:07:01.520 08:43:14 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:07:01.520
00:07:01.520
00:07:01.520 CUnit - A unit testing framework for C - Version 2.1-3
00:07:01.520 http://cunit.sourceforge.net/
00:07:01.520
00:07:01.520
00:07:01.520 Suite: pci
00:07:01.520 Test: pci_hook ...[2024-11-06 08:43:14.786650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 699919 has claimed it
00:07:01.778 EAL: Cannot find device (10000:00:01.0)
00:07:01.778 EAL: Failed to attach device on primary process
00:07:01.778 passed
00:07:01.778
00:07:01.778 Run Summary: Type Total Ran Passed Failed Inactive
00:07:01.778 suites 1 1 n/a 0 0
00:07:01.778 tests 1 1 1 0 0
00:07:01.778 asserts 25 25 25 0 n/a
00:07:01.778
00:07:01.778 Elapsed time = 0.022 seconds
00:07:01.779
00:07:01.779 real 0m0.035s
00:07:01.779 user 0m0.009s
00:07:01.779 sys 0m0.026s
00:07:01.779 08:43:14 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:01.779 08:43:14 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:07:01.779 ************************************
00:07:01.779 END TEST env_pci
00:07:01.779 ************************************
00:07:01.779 08:43:14 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:07:01.779 08:43:14 env -- env/env.sh@15 -- # uname
00:07:01.779 08:43:14 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:01.779 08:43:14 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:01.779 08:43:14 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:01.779 08:43:14 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:07:01.779 08:43:14 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:01.779 08:43:14 env -- common/autotest_common.sh@10 -- # set +x
00:07:01.779 ************************************
00:07:01.779 START TEST env_dpdk_post_init
00:07:01.779 ************************************
00:07:01.779 08:43:14 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:01.779 EAL: Detected CPU lcores: 48
00:07:01.779 EAL: Detected NUMA nodes: 2
00:07:01.779 EAL: Detected shared linkage of DPDK
00:07:01.779 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:01.779 EAL: Selected IOVA mode 'VA'
00:07:01.779 EAL: VFIO support initialized
00:07:01.779 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:01.779 EAL: Using IOMMU type 1 (Type 1)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0)
00:07:01.779 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0)
00:07:02.717 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1)
00:07:02.717 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1)
00:07:05.998 EAL: Releasing PCI mapped resource for 0000:0b:00.0
00:07:05.998 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000
00:07:05.998 Starting DPDK initialization...
00:07:05.998 Starting SPDK post initialization...
00:07:05.998 SPDK NVMe probe
00:07:05.998 Attaching to 0000:0b:00.0
00:07:05.998 Attached to 0000:0b:00.0
00:07:05.998 Cleaning up...
00:07:05.998
00:07:05.998 real 0m4.333s
00:07:05.998 user 0m2.968s
00:07:05.998 sys 0m0.427s
00:07:05.998 08:43:19 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:05.998 08:43:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:07:05.998 ************************************
00:07:05.998 END TEST env_dpdk_post_init
00:07:05.998 ************************************
00:07:05.998 08:43:19 env -- env/env.sh@26 -- # uname
00:07:05.998 08:43:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:07:05.998 08:43:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:05.998 08:43:19 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:05.998 08:43:19 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:05.998 08:43:19 env -- common/autotest_common.sh@10 -- # set +x
00:07:05.998 ************************************
00:07:05.998 START TEST env_mem_callbacks
00:07:05.998 ************************************
00:07:05.998 08:43:19 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:07:05.998 EAL: Detected CPU lcores: 48
00:07:05.998 EAL: Detected NUMA nodes: 2
00:07:05.998 EAL: Detected shared linkage of DPDK
00:07:05.998 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:05.998 EAL: Selected IOVA mode 'VA'
00:07:05.998 EAL: VFIO support initialized
00:07:05.998 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:05.998
00:07:05.998
00:07:05.998 CUnit - A unit testing framework for C - Version 2.1-3
00:07:05.998 http://cunit.sourceforge.net/
00:07:05.998
00:07:05.998
00:07:05.998 Suite: memory
00:07:05.999 Test: test ...
00:07:05.999 register 0x200000200000 2097152
00:07:05.999 malloc 3145728
00:07:05.999 register 0x200000400000 4194304
00:07:05.999 buf 0x200000500000 len 3145728 PASSED
00:07:05.999 malloc 64
00:07:05.999 buf 0x2000004fff40 len 64 PASSED
00:07:05.999 malloc 4194304
00:07:05.999 register 0x200000800000 6291456
00:07:05.999 buf 0x200000a00000 len 4194304 PASSED
00:07:05.999 free 0x200000500000 3145728
00:07:05.999 free 0x2000004fff40 64
00:07:05.999 unregister 0x200000400000 4194304 PASSED
00:07:05.999 free 0x200000a00000 4194304
00:07:05.999 unregister 0x200000800000 6291456 PASSED
00:07:05.999 malloc 8388608
00:07:06.257 register 0x200000400000 10485760
00:07:06.257 buf 0x200000600000 len 8388608 PASSED
00:07:06.257 free 0x200000600000 8388608
00:07:06.257 unregister 0x200000400000 10485760 PASSED
00:07:06.257 passed
00:07:06.257
00:07:06.257 Run Summary: Type Total Ran Passed Failed Inactive
00:07:06.257 suites 1 1 n/a 0 0
00:07:06.257 tests 1 1 1 0 0
00:07:06.257 asserts 15 15 15 0 n/a
00:07:06.257
00:07:06.257 Elapsed time = 0.005 seconds
00:07:06.257
00:07:06.257 real 0m0.048s
00:07:06.257 user 0m0.015s
00:07:06.257 sys 0m0.033s
00:07:06.257 08:43:19 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:06.257 08:43:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:07:06.257 ************************************
00:07:06.257 END TEST env_mem_callbacks
00:07:06.257 ************************************
00:07:06.257
00:07:06.257 real 0m6.406s
00:07:06.257 user 0m4.193s
00:07:06.257 sys 0m1.265s
00:07:06.257 08:43:19 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:06.257 08:43:19 env -- common/autotest_common.sh@10 -- # set +x
00:07:06.257 ************************************
00:07:06.257 END TEST env
00:07:06.257 ************************************
00:07:06.257 08:43:19 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:07:06.257 08:43:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:06.257 08:43:19 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:06.257 08:43:19 -- common/autotest_common.sh@10 -- # set +x
00:07:06.257 ************************************
00:07:06.257 START TEST rpc
00:07:06.257 ************************************
00:07:06.257 08:43:19 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:07:06.257 * Looking for test storage...
00:07:06.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:07:06.257 08:43:19 rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:07:06.257 08:43:19 rpc -- common/autotest_common.sh@1689 -- # lcov --version
00:07:06.257 08:43:19 rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:07:06.257 08:43:19 rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:07:06.257 08:43:19 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:06.257 08:43:19 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:06.257 08:43:19 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:06.257 08:43:19 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:06.257 08:43:19 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:06.257 08:43:19 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:06.257 08:43:19 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:06.257 08:43:19 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:06.257 08:43:19 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:06.257 08:43:19 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:06.257 08:43:19 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:06.257 08:43:19 rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:06.257 08:43:19 rpc -- scripts/common.sh@345 -- # : 1
00:07:06.257 08:43:19 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:06.257 08:43:19 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:06.257 08:43:19 rpc -- scripts/common.sh@365 -- # decimal 1
00:07:06.257 08:43:19 rpc -- scripts/common.sh@353 -- # local d=1
00:07:06.258 08:43:19 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:06.258 08:43:19 rpc -- scripts/common.sh@355 -- # echo 1
00:07:06.258 08:43:19 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:06.258 08:43:19 rpc -- scripts/common.sh@366 -- # decimal 2
00:07:06.258 08:43:19 rpc -- scripts/common.sh@353 -- # local d=2
00:07:06.258 08:43:19 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:06.258 08:43:19 rpc -- scripts/common.sh@355 -- # echo 2
00:07:06.258 08:43:19 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:06.258 08:43:19 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:06.258 08:43:19 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:06.258 08:43:19 rpc -- scripts/common.sh@368 -- # return 0
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:07:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.258 --rc genhtml_branch_coverage=1
00:07:06.258 --rc genhtml_function_coverage=1
00:07:06.258 --rc genhtml_legend=1
00:07:06.258 --rc geninfo_all_blocks=1
00:07:06.258 --rc geninfo_unexecuted_blocks=1
00:07:06.258
00:07:06.258 '
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:07:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.258 --rc genhtml_branch_coverage=1
00:07:06.258 --rc genhtml_function_coverage=1
00:07:06.258 --rc genhtml_legend=1
00:07:06.258 --rc geninfo_all_blocks=1
00:07:06.258 --rc geninfo_unexecuted_blocks=1
00:07:06.258
00:07:06.258 '
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:07:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.258 --rc genhtml_branch_coverage=1
00:07:06.258 --rc genhtml_function_coverage=1
00:07:06.258 --rc genhtml_legend=1
00:07:06.258 --rc geninfo_all_blocks=1
00:07:06.258 --rc geninfo_unexecuted_blocks=1
00:07:06.258
00:07:06.258 '
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:07:06.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:06.258 --rc genhtml_branch_coverage=1
00:07:06.258 --rc genhtml_function_coverage=1
00:07:06.258 --rc genhtml_legend=1
00:07:06.258 --rc geninfo_all_blocks=1
00:07:06.258 --rc geninfo_unexecuted_blocks=1
00:07:06.258
00:07:06.258 '
00:07:06.258 08:43:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=700579
00:07:06.258 08:43:19 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:07:06.258 08:43:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:06.258 08:43:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 700579
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@831 -- # '[' -z 700579 ']'
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:06.258 08:43:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.523 [2024-11-06 08:43:19.562166] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:07:06.523 [2024-11-06 08:43:19.562262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid700579 ]
00:07:06.523 [2024-11-06 08:43:19.630735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:06.523 [2024-11-06 08:43:19.685580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:07:06.523 [2024-11-06 08:43:19.685638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 700579' to capture a snapshot of events at runtime.
00:07:06.523 [2024-11-06 08:43:19.685666] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:06.523 [2024-11-06 08:43:19.685677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:06.523 [2024-11-06 08:43:19.685686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid700579 for offline analysis/debug.
00:07:06.523 [2024-11-06 08:43:19.686277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.781 08:43:19 rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:06.781 08:43:19 rpc -- common/autotest_common.sh@864 -- # return 0
00:07:06.781 08:43:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:07:06.781 08:43:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:07:06.781 08:43:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:06.781 08:43:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:06.781 08:43:19 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:06.781 08:43:19 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:06.781 08:43:19 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:06.781 ************************************
00:07:06.781 START TEST rpc_integrity
00:07:06.781 ************************************
00:07:06.781 08:43:19 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity
00:07:06.781 08:43:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:06.781 08:43:19 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.781 08:43:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:06.781 08:43:19 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.781 08:43:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:06.781 08:43:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:06.782 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:07:06.782 {
00:07:06.782 "name": "Malloc0",
00:07:06.782 "aliases": [
00:07:06.782 "173f6f27-9b02-40dd-95e9-d4a693b5972c"
00:07:06.782 ],
00:07:06.782 "product_name": "Malloc disk",
00:07:06.782 "block_size": 512,
00:07:06.782 "num_blocks": 16384,
00:07:06.782 "uuid": "173f6f27-9b02-40dd-95e9-d4a693b5972c",
00:07:06.782 "assigned_rate_limits": {
00:07:06.782 "rw_ios_per_sec": 0,
00:07:06.782 "rw_mbytes_per_sec": 0,
00:07:06.782 "r_mbytes_per_sec": 0,
00:07:06.782 "w_mbytes_per_sec": 0
00:07:06.782 },
00:07:06.782 "claimed": false,
00:07:06.782 "zoned": false,
00:07:06.782 "supported_io_types": {
00:07:06.782 "read": true,
00:07:06.782 "write": true,
00:07:06.782 "unmap": true,
00:07:06.782 "flush": true,
00:07:06.782 "reset": true,
00:07:06.782 "nvme_admin": false,
00:07:06.782 "nvme_io": false,
00:07:06.782 "nvme_io_md": false,
00:07:06.782 "write_zeroes": true,
00:07:06.782 "zcopy": true,
00:07:06.782 "get_zone_info": false,
00:07:06.782 "zone_management": false,
00:07:06.782 "zone_append": false,
00:07:06.782 "compare": false,
00:07:06.782 "compare_and_write": false,
00:07:06.782 "abort": true,
00:07:06.782 "seek_hole": false,
00:07:06.782 "seek_data": false,
00:07:06.782 "copy": true,
00:07:06.782 "nvme_iov_md": false
00:07:06.782 },
00:07:06.782 "memory_domains": [
00:07:06.782 {
00:07:06.782 "dma_device_id": "system",
00:07:06.782 "dma_device_type": 1
00:07:06.782 },
00:07:06.782 {
00:07:06.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:06.782 "dma_device_type": 2
00:07:06.782 }
00:07:06.782 ],
00:07:06.782 "driver_specific": {}
00:07:06.782 }
00:07:06.782 ]'
00:07:06.782 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:07:07.040 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:07.040 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:07.040 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.040 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.040 [2024-11-06 08:43:20.079567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:07.040 [2024-11-06 08:43:20.079630] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:07.040 [2024-11-06 08:43:20.079652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d28d20
00:07:07.040 [2024-11-06 08:43:20.079665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:07.040 [2024-11-06 08:43:20.081075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:07.040 [2024-11-06 08:43:20.081101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:07.040 Passthru0
00:07:07.040 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.040 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:07:07.041 {
00:07:07.041 "name": "Malloc0",
00:07:07.041 "aliases": [
00:07:07.041 "173f6f27-9b02-40dd-95e9-d4a693b5972c"
00:07:07.041 ],
00:07:07.041 "product_name": "Malloc disk",
00:07:07.041 "block_size": 512,
00:07:07.041 "num_blocks": 16384,
00:07:07.041 "uuid": "173f6f27-9b02-40dd-95e9-d4a693b5972c",
00:07:07.041 "assigned_rate_limits": {
00:07:07.041 "rw_ios_per_sec": 0,
00:07:07.041 "rw_mbytes_per_sec": 0,
00:07:07.041 "r_mbytes_per_sec": 0,
00:07:07.041 "w_mbytes_per_sec": 0
00:07:07.041 },
00:07:07.041 "claimed": true,
00:07:07.041 "claim_type": "exclusive_write",
00:07:07.041 "zoned": false,
00:07:07.041 "supported_io_types": {
00:07:07.041 "read": true,
00:07:07.041 "write": true,
00:07:07.041 "unmap": true,
00:07:07.041 "flush": true,
00:07:07.041 "reset": true,
00:07:07.041 "nvme_admin": false,
00:07:07.041 "nvme_io": false,
00:07:07.041 "nvme_io_md": false,
00:07:07.041 "write_zeroes": true,
00:07:07.041 "zcopy": true,
00:07:07.041 "get_zone_info": false,
00:07:07.041 "zone_management": false,
00:07:07.041 "zone_append": false,
00:07:07.041 "compare": false,
00:07:07.041 "compare_and_write": false,
00:07:07.041 "abort": true,
00:07:07.041 "seek_hole": false,
00:07:07.041 "seek_data": false,
00:07:07.041 "copy": true,
00:07:07.041 "nvme_iov_md": false
00:07:07.041 },
00:07:07.041 "memory_domains": [
00:07:07.041 {
00:07:07.041 "dma_device_id": "system",
00:07:07.041 "dma_device_type": 1
00:07:07.041 },
00:07:07.041 {
00:07:07.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:07.041 "dma_device_type": 2
00:07:07.041 }
00:07:07.041 ],
00:07:07.041 "driver_specific": {}
00:07:07.041 },
00:07:07.041 {
00:07:07.041 "name": "Passthru0",
00:07:07.041 "aliases": [
00:07:07.041 "2808ccac-34d2-51ae-b8a6-58392d3bcd73"
00:07:07.041 ],
00:07:07.041 "product_name": "passthru",
00:07:07.041 "block_size": 512,
00:07:07.041 "num_blocks": 16384,
00:07:07.041 "uuid": "2808ccac-34d2-51ae-b8a6-58392d3bcd73",
00:07:07.041 "assigned_rate_limits": {
00:07:07.041 "rw_ios_per_sec": 0,
00:07:07.041 "rw_mbytes_per_sec": 0,
00:07:07.041 "r_mbytes_per_sec": 0,
00:07:07.041 "w_mbytes_per_sec": 0
00:07:07.041 },
00:07:07.041 "claimed": false,
00:07:07.041 "zoned": false,
00:07:07.041 "supported_io_types": {
00:07:07.041 "read": true,
00:07:07.041 "write": true,
00:07:07.041 "unmap": true,
00:07:07.041 "flush": true,
00:07:07.041 "reset": true,
00:07:07.041 "nvme_admin": false,
00:07:07.041 "nvme_io": false,
00:07:07.041 "nvme_io_md": false,
00:07:07.041 "write_zeroes": true,
00:07:07.041 "zcopy": true,
00:07:07.041 "get_zone_info": false,
00:07:07.041 "zone_management": false,
00:07:07.041 "zone_append": false,
00:07:07.041 "compare": false,
00:07:07.041 "compare_and_write": false,
00:07:07.041 "abort": true,
00:07:07.041 "seek_hole": false,
00:07:07.041 "seek_data": false,
00:07:07.041 "copy": true,
00:07:07.041 "nvme_iov_md": false
00:07:07.041 },
00:07:07.041 "memory_domains": [
00:07:07.041 {
00:07:07.041 "dma_device_id": "system",
00:07:07.041 "dma_device_type": 1
00:07:07.041 },
00:07:07.041 {
00:07:07.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:07.041 "dma_device_type": 2
00:07:07.041 }
00:07:07.041 ],
00:07:07.041 "driver_specific": {
00:07:07.041 "passthru": {
00:07:07.041 "name": "Passthru0",
00:07:07.041 "base_bdev_name": "Malloc0"
00:07:07.041 }
00:07:07.041 }
00:07:07.041 }
00:07:07.041 ]'
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:07:07.041 08:43:20 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:07.041
00:07:07.041 real 0m0.218s
00:07:07.041 user 0m0.142s
00:07:07.041 sys 0m0.016s
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:07.041 08:43:20 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 ************************************
00:07:07.041 END TEST rpc_integrity
00:07:07.041 ************************************
00:07:07.041 08:43:20 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:07:07.041 08:43:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:07.041 08:43:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:07.041 08:43:20 rpc -- common/autotest_common.sh@10 -- # set +x
00:07:07.041 ************************************
00:07:07.041 START TEST rpc_plugins
00:07:07.041 ************************************ 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:07.041 { 00:07:07.041 "name": "Malloc1", 00:07:07.041 "aliases": [ 00:07:07.041 "ac2e4303-d10b-4629-8dda-7160b71bb175" 00:07:07.041 ], 00:07:07.041 "product_name": "Malloc disk", 00:07:07.041 "block_size": 4096, 00:07:07.041 "num_blocks": 256, 00:07:07.041 "uuid": "ac2e4303-d10b-4629-8dda-7160b71bb175", 00:07:07.041 "assigned_rate_limits": { 00:07:07.041 "rw_ios_per_sec": 0, 00:07:07.041 "rw_mbytes_per_sec": 0, 00:07:07.041 "r_mbytes_per_sec": 0, 00:07:07.041 "w_mbytes_per_sec": 0 00:07:07.041 }, 00:07:07.041 "claimed": false, 00:07:07.041 "zoned": false, 00:07:07.041 "supported_io_types": { 00:07:07.041 "read": true, 00:07:07.041 "write": true, 00:07:07.041 "unmap": true, 00:07:07.041 "flush": true, 00:07:07.041 "reset": true, 00:07:07.041 "nvme_admin": false, 00:07:07.041 "nvme_io": false, 00:07:07.041 "nvme_io_md": false, 00:07:07.041 "write_zeroes": true, 00:07:07.041 "zcopy": true, 00:07:07.041 "get_zone_info": false, 00:07:07.041 "zone_management": false, 00:07:07.041 
"zone_append": false, 00:07:07.041 "compare": false, 00:07:07.041 "compare_and_write": false, 00:07:07.041 "abort": true, 00:07:07.041 "seek_hole": false, 00:07:07.041 "seek_data": false, 00:07:07.041 "copy": true, 00:07:07.041 "nvme_iov_md": false 00:07:07.041 }, 00:07:07.041 "memory_domains": [ 00:07:07.041 { 00:07:07.041 "dma_device_id": "system", 00:07:07.041 "dma_device_type": 1 00:07:07.041 }, 00:07:07.041 { 00:07:07.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.041 "dma_device_type": 2 00:07:07.041 } 00:07:07.041 ], 00:07:07.041 "driver_specific": {} 00:07:07.041 } 00:07:07.041 ]' 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.041 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:07.041 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:07.300 08:43:20 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:07.300 00:07:07.300 real 0m0.116s 00:07:07.300 user 0m0.072s 00:07:07.300 sys 0m0.012s 00:07:07.300 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.300 08:43:20 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:07.300 ************************************ 
00:07:07.300 END TEST rpc_plugins 00:07:07.300 ************************************ 00:07:07.300 08:43:20 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:07.300 08:43:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.300 08:43:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.300 08:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.300 ************************************ 00:07:07.300 START TEST rpc_trace_cmd_test 00:07:07.300 ************************************ 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:07.300 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid700579", 00:07:07.300 "tpoint_group_mask": "0x8", 00:07:07.300 "iscsi_conn": { 00:07:07.300 "mask": "0x2", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "scsi": { 00:07:07.300 "mask": "0x4", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "bdev": { 00:07:07.300 "mask": "0x8", 00:07:07.300 "tpoint_mask": "0xffffffffffffffff" 00:07:07.300 }, 00:07:07.300 "nvmf_rdma": { 00:07:07.300 "mask": "0x10", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "nvmf_tcp": { 00:07:07.300 "mask": "0x20", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "ftl": { 00:07:07.300 "mask": "0x40", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "blobfs": { 00:07:07.300 "mask": "0x80", 00:07:07.300 
"tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "dsa": { 00:07:07.300 "mask": "0x200", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "thread": { 00:07:07.300 "mask": "0x400", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "nvme_pcie": { 00:07:07.300 "mask": "0x800", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "iaa": { 00:07:07.300 "mask": "0x1000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "nvme_tcp": { 00:07:07.300 "mask": "0x2000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "bdev_nvme": { 00:07:07.300 "mask": "0x4000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "sock": { 00:07:07.300 "mask": "0x8000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "blob": { 00:07:07.300 "mask": "0x10000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "bdev_raid": { 00:07:07.300 "mask": "0x20000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 }, 00:07:07.300 "scheduler": { 00:07:07.300 "mask": "0x40000", 00:07:07.300 "tpoint_mask": "0x0" 00:07:07.300 } 00:07:07.300 }' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:07.300 00:07:07.300 real 0m0.186s 00:07:07.300 user 0m0.162s 00:07:07.300 sys 0m0.016s 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.300 08:43:20 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.300 ************************************ 00:07:07.300 END TEST rpc_trace_cmd_test 00:07:07.301 ************************************ 00:07:07.559 08:43:20 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:07.559 08:43:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:07.559 08:43:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:07.559 08:43:20 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.559 08:43:20 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.559 08:43:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 ************************************ 00:07:07.559 START TEST rpc_daemon_integrity 00:07:07.559 ************************************ 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:07.559 { 00:07:07.559 "name": "Malloc2", 00:07:07.559 "aliases": [ 00:07:07.559 "51563036-3d1d-4aee-91a8-779c6865f3c3" 00:07:07.559 ], 00:07:07.559 "product_name": "Malloc disk", 00:07:07.559 "block_size": 512, 00:07:07.559 "num_blocks": 16384, 00:07:07.559 "uuid": "51563036-3d1d-4aee-91a8-779c6865f3c3", 00:07:07.559 "assigned_rate_limits": { 00:07:07.559 "rw_ios_per_sec": 0, 00:07:07.559 "rw_mbytes_per_sec": 0, 00:07:07.559 "r_mbytes_per_sec": 0, 00:07:07.559 "w_mbytes_per_sec": 0 00:07:07.559 }, 00:07:07.559 "claimed": false, 00:07:07.559 "zoned": false, 00:07:07.559 "supported_io_types": { 00:07:07.559 "read": true, 00:07:07.559 "write": true, 00:07:07.559 "unmap": true, 00:07:07.559 "flush": true, 00:07:07.559 "reset": true, 00:07:07.559 "nvme_admin": false, 00:07:07.559 "nvme_io": false, 00:07:07.559 "nvme_io_md": false, 00:07:07.559 "write_zeroes": true, 00:07:07.559 "zcopy": true, 00:07:07.559 "get_zone_info": false, 00:07:07.559 "zone_management": false, 00:07:07.559 "zone_append": false, 00:07:07.559 "compare": false, 00:07:07.559 "compare_and_write": false, 00:07:07.559 "abort": true, 00:07:07.559 "seek_hole": false, 00:07:07.559 "seek_data": false, 00:07:07.559 "copy": true, 00:07:07.559 "nvme_iov_md": false 00:07:07.559 }, 00:07:07.559 "memory_domains": [ 00:07:07.559 { 
00:07:07.559 "dma_device_id": "system", 00:07:07.559 "dma_device_type": 1 00:07:07.559 }, 00:07:07.559 { 00:07:07.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.559 "dma_device_type": 2 00:07:07.559 } 00:07:07.559 ], 00:07:07.559 "driver_specific": {} 00:07:07.559 } 00:07:07.559 ]' 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 [2024-11-06 08:43:20.737504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:07.559 [2024-11-06 08:43:20.737570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.559 [2024-11-06 08:43:20.737594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be4fb0 00:07:07.559 [2024-11-06 08:43:20.737608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.559 [2024-11-06 08:43:20.739013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.559 [2024-11-06 08:43:20.739039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:07.559 Passthru0 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:07.559 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:07.559 { 00:07:07.559 "name": "Malloc2", 00:07:07.559 "aliases": [ 00:07:07.559 "51563036-3d1d-4aee-91a8-779c6865f3c3" 00:07:07.559 ], 00:07:07.559 "product_name": "Malloc disk", 00:07:07.559 "block_size": 512, 00:07:07.559 "num_blocks": 16384, 00:07:07.559 "uuid": "51563036-3d1d-4aee-91a8-779c6865f3c3", 00:07:07.559 "assigned_rate_limits": { 00:07:07.559 "rw_ios_per_sec": 0, 00:07:07.559 "rw_mbytes_per_sec": 0, 00:07:07.559 "r_mbytes_per_sec": 0, 00:07:07.559 "w_mbytes_per_sec": 0 00:07:07.559 }, 00:07:07.559 "claimed": true, 00:07:07.559 "claim_type": "exclusive_write", 00:07:07.559 "zoned": false, 00:07:07.559 "supported_io_types": { 00:07:07.559 "read": true, 00:07:07.559 "write": true, 00:07:07.559 "unmap": true, 00:07:07.559 "flush": true, 00:07:07.559 "reset": true, 00:07:07.559 "nvme_admin": false, 00:07:07.559 "nvme_io": false, 00:07:07.559 "nvme_io_md": false, 00:07:07.559 "write_zeroes": true, 00:07:07.559 "zcopy": true, 00:07:07.559 "get_zone_info": false, 00:07:07.559 "zone_management": false, 00:07:07.559 "zone_append": false, 00:07:07.559 "compare": false, 00:07:07.559 "compare_and_write": false, 00:07:07.559 "abort": true, 00:07:07.559 "seek_hole": false, 00:07:07.559 "seek_data": false, 00:07:07.559 "copy": true, 00:07:07.559 "nvme_iov_md": false 00:07:07.559 }, 00:07:07.559 "memory_domains": [ 00:07:07.559 { 00:07:07.559 "dma_device_id": "system", 00:07:07.559 "dma_device_type": 1 00:07:07.559 }, 00:07:07.559 { 00:07:07.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.559 "dma_device_type": 2 00:07:07.559 } 00:07:07.559 ], 00:07:07.559 "driver_specific": {} 00:07:07.559 }, 00:07:07.559 { 00:07:07.559 "name": "Passthru0", 00:07:07.559 "aliases": [ 00:07:07.559 "0fce68bc-d92f-5f6c-b1d1-8ed74f036df9" 00:07:07.559 ], 00:07:07.559 "product_name": "passthru", 00:07:07.559 "block_size": 512, 00:07:07.559 "num_blocks": 16384, 00:07:07.559 "uuid": 
"0fce68bc-d92f-5f6c-b1d1-8ed74f036df9", 00:07:07.559 "assigned_rate_limits": { 00:07:07.559 "rw_ios_per_sec": 0, 00:07:07.559 "rw_mbytes_per_sec": 0, 00:07:07.559 "r_mbytes_per_sec": 0, 00:07:07.559 "w_mbytes_per_sec": 0 00:07:07.559 }, 00:07:07.559 "claimed": false, 00:07:07.559 "zoned": false, 00:07:07.559 "supported_io_types": { 00:07:07.559 "read": true, 00:07:07.559 "write": true, 00:07:07.559 "unmap": true, 00:07:07.559 "flush": true, 00:07:07.559 "reset": true, 00:07:07.559 "nvme_admin": false, 00:07:07.559 "nvme_io": false, 00:07:07.559 "nvme_io_md": false, 00:07:07.559 "write_zeroes": true, 00:07:07.559 "zcopy": true, 00:07:07.559 "get_zone_info": false, 00:07:07.559 "zone_management": false, 00:07:07.559 "zone_append": false, 00:07:07.559 "compare": false, 00:07:07.559 "compare_and_write": false, 00:07:07.559 "abort": true, 00:07:07.559 "seek_hole": false, 00:07:07.559 "seek_data": false, 00:07:07.559 "copy": true, 00:07:07.559 "nvme_iov_md": false 00:07:07.559 }, 00:07:07.559 "memory_domains": [ 00:07:07.559 { 00:07:07.559 "dma_device_id": "system", 00:07:07.559 "dma_device_type": 1 00:07:07.559 }, 00:07:07.559 { 00:07:07.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.559 "dma_device_type": 2 00:07:07.559 } 00:07:07.559 ], 00:07:07.559 "driver_specific": { 00:07:07.559 "passthru": { 00:07:07.560 "name": "Passthru0", 00:07:07.560 "base_bdev_name": "Malloc2" 00:07:07.560 } 00:07:07.560 } 00:07:07.560 } 00:07:07.560 ]' 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:07.560 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:07.818 08:43:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:07.818 00:07:07.818 real 0m0.212s 00:07:07.818 user 0m0.143s 00:07:07.818 sys 0m0.017s 00:07:07.818 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.818 08:43:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:07.818 ************************************ 00:07:07.818 END TEST rpc_daemon_integrity 00:07:07.818 ************************************ 00:07:07.818 08:43:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:07.818 08:43:20 rpc -- rpc/rpc.sh@84 -- # killprocess 700579 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@950 -- # '[' -z 700579 ']' 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@954 -- # kill -0 700579 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@955 -- # uname 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.818 08:43:20 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 700579 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 700579' 00:07:07.818 killing process with pid 700579 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@969 -- # kill 700579 00:07:07.818 08:43:20 rpc -- common/autotest_common.sh@974 -- # wait 700579 00:07:08.078 00:07:08.078 real 0m1.965s 00:07:08.078 user 0m2.447s 00:07:08.078 sys 0m0.603s 00:07:08.078 08:43:21 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.078 08:43:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.078 ************************************ 00:07:08.078 END TEST rpc 00:07:08.078 ************************************ 00:07:08.078 08:43:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.078 08:43:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.078 08:43:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.078 08:43:21 -- common/autotest_common.sh@10 -- # set +x 00:07:08.336 ************************************ 00:07:08.336 START TEST skip_rpc 00:07:08.336 ************************************ 00:07:08.336 08:43:21 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:08.336 * Looking for test storage... 
00:07:08.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.337 08:43:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.337 --rc genhtml_branch_coverage=1 00:07:08.337 --rc genhtml_function_coverage=1 00:07:08.337 --rc genhtml_legend=1 00:07:08.337 --rc geninfo_all_blocks=1 00:07:08.337 --rc geninfo_unexecuted_blocks=1 00:07:08.337 00:07:08.337 ' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.337 --rc genhtml_branch_coverage=1 00:07:08.337 --rc genhtml_function_coverage=1 00:07:08.337 --rc genhtml_legend=1 00:07:08.337 --rc geninfo_all_blocks=1 00:07:08.337 --rc geninfo_unexecuted_blocks=1 00:07:08.337 00:07:08.337 ' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:07:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.337 --rc genhtml_branch_coverage=1 00:07:08.337 --rc genhtml_function_coverage=1 00:07:08.337 --rc genhtml_legend=1 00:07:08.337 --rc geninfo_all_blocks=1 00:07:08.337 --rc geninfo_unexecuted_blocks=1 00:07:08.337 00:07:08.337 ' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.337 --rc genhtml_branch_coverage=1 00:07:08.337 --rc genhtml_function_coverage=1 00:07:08.337 --rc genhtml_legend=1 00:07:08.337 --rc geninfo_all_blocks=1 00:07:08.337 --rc geninfo_unexecuted_blocks=1 00:07:08.337 00:07:08.337 ' 00:07:08.337 08:43:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:08.337 08:43:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:08.337 08:43:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.337 08:43:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.337 ************************************ 00:07:08.337 START TEST skip_rpc 00:07:08.337 ************************************ 00:07:08.337 08:43:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:08.337 08:43:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=701026 00:07:08.337 08:43:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:08.337 08:43:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:08.337 08:43:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:08.337 [2024-11-06 08:43:21.602252] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:08.337 [2024-11-06 08:43:21.602315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701026 ] 00:07:08.595 [2024-11-06 08:43:21.665697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.595 [2024-11-06 08:43:21.723130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.859 08:43:26 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.859 08:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 701026 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 701026 ']' 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 701026 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701026 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701026' 00:07:13.860 killing process with pid 701026 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 701026 00:07:13.860 08:43:26 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 701026 00:07:13.860 00:07:13.860 real 0m5.468s 00:07:13.860 user 0m5.174s 00:07:13.860 sys 0m0.308s 00:07:13.860 08:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.860 08:43:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.860 ************************************ 00:07:13.860 END TEST skip_rpc 00:07:13.860 ************************************ 00:07:13.860 08:43:27 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:13.860 08:43:27 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.860 08:43:27 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.860 08:43:27 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.860 ************************************ 00:07:13.860 START TEST skip_rpc_with_json 00:07:13.860 ************************************ 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=701719 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 701719 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 701719 ']' 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.860 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:13.860 [2024-11-06 08:43:27.127174] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:07:13.860 [2024-11-06 08:43:27.127284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid701719 ] 00:07:14.118 [2024-11-06 08:43:27.191746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.118 [2024-11-06 08:43:27.245143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 [2024-11-06 08:43:27.505611] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:14.377 request: 00:07:14.377 { 00:07:14.377 "trtype": "tcp", 00:07:14.377 "method": "nvmf_get_transports", 00:07:14.377 "req_id": 1 00:07:14.377 } 00:07:14.377 Got JSON-RPC error response 00:07:14.377 response: 00:07:14.377 { 00:07:14.377 "code": -19, 00:07:14.377 "message": "No such device" 00:07:14.377 } 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.377 [2024-11-06 08:43:27.513711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.377 08:43:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.377 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:14.378 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.378 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:14.635 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.636 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:14.636 { 00:07:14.636 "subsystems": [ 00:07:14.636 { 00:07:14.636 "subsystem": "fsdev", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "fsdev_set_opts", 00:07:14.636 "params": { 00:07:14.636 "fsdev_io_pool_size": 65535, 00:07:14.636 "fsdev_io_cache_size": 256 00:07:14.636 } 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "vfio_user_target", 00:07:14.636 "config": null 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "keyring", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "iobuf", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "iobuf_set_options", 00:07:14.636 "params": { 00:07:14.636 "small_pool_count": 8192, 00:07:14.636 "large_pool_count": 1024, 00:07:14.636 "small_bufsize": 8192, 00:07:14.636 "large_bufsize": 135168, 00:07:14.636 "enable_numa": false 00:07:14.636 } 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "sock", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "sock_set_default_impl", 00:07:14.636 "params": { 00:07:14.636 "impl_name": "posix" 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "sock_impl_set_options", 00:07:14.636 "params": { 00:07:14.636 "impl_name": "ssl", 00:07:14.636 "recv_buf_size": 4096, 00:07:14.636 "send_buf_size": 4096, 
00:07:14.636 "enable_recv_pipe": true, 00:07:14.636 "enable_quickack": false, 00:07:14.636 "enable_placement_id": 0, 00:07:14.636 "enable_zerocopy_send_server": true, 00:07:14.636 "enable_zerocopy_send_client": false, 00:07:14.636 "zerocopy_threshold": 0, 00:07:14.636 "tls_version": 0, 00:07:14.636 "enable_ktls": false 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "sock_impl_set_options", 00:07:14.636 "params": { 00:07:14.636 "impl_name": "posix", 00:07:14.636 "recv_buf_size": 2097152, 00:07:14.636 "send_buf_size": 2097152, 00:07:14.636 "enable_recv_pipe": true, 00:07:14.636 "enable_quickack": false, 00:07:14.636 "enable_placement_id": 0, 00:07:14.636 "enable_zerocopy_send_server": true, 00:07:14.636 "enable_zerocopy_send_client": false, 00:07:14.636 "zerocopy_threshold": 0, 00:07:14.636 "tls_version": 0, 00:07:14.636 "enable_ktls": false 00:07:14.636 } 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "vmd", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "accel", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "accel_set_options", 00:07:14.636 "params": { 00:07:14.636 "small_cache_size": 128, 00:07:14.636 "large_cache_size": 16, 00:07:14.636 "task_count": 2048, 00:07:14.636 "sequence_count": 2048, 00:07:14.636 "buf_count": 2048 00:07:14.636 } 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "bdev", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "bdev_set_options", 00:07:14.636 "params": { 00:07:14.636 "bdev_io_pool_size": 65535, 00:07:14.636 "bdev_io_cache_size": 256, 00:07:14.636 "bdev_auto_examine": true, 00:07:14.636 "iobuf_small_cache_size": 128, 00:07:14.636 "iobuf_large_cache_size": 16 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "bdev_raid_set_options", 00:07:14.636 "params": { 00:07:14.636 "process_window_size_kb": 1024, 00:07:14.636 "process_max_bandwidth_mb_sec": 0 
00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "bdev_iscsi_set_options", 00:07:14.636 "params": { 00:07:14.636 "timeout_sec": 30 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "bdev_nvme_set_options", 00:07:14.636 "params": { 00:07:14.636 "action_on_timeout": "none", 00:07:14.636 "timeout_us": 0, 00:07:14.636 "timeout_admin_us": 0, 00:07:14.636 "keep_alive_timeout_ms": 10000, 00:07:14.636 "arbitration_burst": 0, 00:07:14.636 "low_priority_weight": 0, 00:07:14.636 "medium_priority_weight": 0, 00:07:14.636 "high_priority_weight": 0, 00:07:14.636 "nvme_adminq_poll_period_us": 10000, 00:07:14.636 "nvme_ioq_poll_period_us": 0, 00:07:14.636 "io_queue_requests": 0, 00:07:14.636 "delay_cmd_submit": true, 00:07:14.636 "transport_retry_count": 4, 00:07:14.636 "bdev_retry_count": 3, 00:07:14.636 "transport_ack_timeout": 0, 00:07:14.636 "ctrlr_loss_timeout_sec": 0, 00:07:14.636 "reconnect_delay_sec": 0, 00:07:14.636 "fast_io_fail_timeout_sec": 0, 00:07:14.636 "disable_auto_failback": false, 00:07:14.636 "generate_uuids": false, 00:07:14.636 "transport_tos": 0, 00:07:14.636 "nvme_error_stat": false, 00:07:14.636 "rdma_srq_size": 0, 00:07:14.636 "io_path_stat": false, 00:07:14.636 "allow_accel_sequence": false, 00:07:14.636 "rdma_max_cq_size": 0, 00:07:14.636 "rdma_cm_event_timeout_ms": 0, 00:07:14.636 "dhchap_digests": [ 00:07:14.636 "sha256", 00:07:14.636 "sha384", 00:07:14.636 "sha512" 00:07:14.636 ], 00:07:14.636 "dhchap_dhgroups": [ 00:07:14.636 "null", 00:07:14.636 "ffdhe2048", 00:07:14.636 "ffdhe3072", 00:07:14.636 "ffdhe4096", 00:07:14.636 "ffdhe6144", 00:07:14.636 "ffdhe8192" 00:07:14.636 ] 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "bdev_nvme_set_hotplug", 00:07:14.636 "params": { 00:07:14.636 "period_us": 100000, 00:07:14.636 "enable": false 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "bdev_wait_for_examine" 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 
00:07:14.636 "subsystem": "scsi", 00:07:14.636 "config": null 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "scheduler", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "framework_set_scheduler", 00:07:14.636 "params": { 00:07:14.636 "name": "static" 00:07:14.636 } 00:07:14.636 } 00:07:14.636 ] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "vhost_scsi", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "vhost_blk", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "ublk", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "nbd", 00:07:14.636 "config": [] 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "subsystem": "nvmf", 00:07:14.636 "config": [ 00:07:14.636 { 00:07:14.636 "method": "nvmf_set_config", 00:07:14.636 "params": { 00:07:14.636 "discovery_filter": "match_any", 00:07:14.636 "admin_cmd_passthru": { 00:07:14.636 "identify_ctrlr": false 00:07:14.636 }, 00:07:14.636 "dhchap_digests": [ 00:07:14.636 "sha256", 00:07:14.636 "sha384", 00:07:14.636 "sha512" 00:07:14.636 ], 00:07:14.636 "dhchap_dhgroups": [ 00:07:14.636 "null", 00:07:14.636 "ffdhe2048", 00:07:14.636 "ffdhe3072", 00:07:14.636 "ffdhe4096", 00:07:14.636 "ffdhe6144", 00:07:14.636 "ffdhe8192" 00:07:14.636 ] 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "nvmf_set_max_subsystems", 00:07:14.636 "params": { 00:07:14.636 "max_subsystems": 1024 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "nvmf_set_crdt", 00:07:14.636 "params": { 00:07:14.636 "crdt1": 0, 00:07:14.636 "crdt2": 0, 00:07:14.636 "crdt3": 0 00:07:14.636 } 00:07:14.636 }, 00:07:14.636 { 00:07:14.636 "method": "nvmf_create_transport", 00:07:14.636 "params": { 00:07:14.636 "trtype": "TCP", 00:07:14.636 "max_queue_depth": 128, 00:07:14.636 "max_io_qpairs_per_ctrlr": 127, 00:07:14.636 "in_capsule_data_size": 4096, 00:07:14.636 "max_io_size": 131072, 00:07:14.636 
"io_unit_size": 131072, 00:07:14.636 "max_aq_depth": 128, 00:07:14.636 "num_shared_buffers": 511, 00:07:14.636 "buf_cache_size": 4294967295, 00:07:14.636 "dif_insert_or_strip": false, 00:07:14.636 "zcopy": false, 00:07:14.637 "c2h_success": true, 00:07:14.637 "sock_priority": 0, 00:07:14.637 "abort_timeout_sec": 1, 00:07:14.637 "ack_timeout": 0, 00:07:14.637 "data_wr_pool_size": 0 00:07:14.637 } 00:07:14.637 } 00:07:14.637 ] 00:07:14.637 }, 00:07:14.637 { 00:07:14.637 "subsystem": "iscsi", 00:07:14.637 "config": [ 00:07:14.637 { 00:07:14.637 "method": "iscsi_set_options", 00:07:14.637 "params": { 00:07:14.637 "node_base": "iqn.2016-06.io.spdk", 00:07:14.637 "max_sessions": 128, 00:07:14.637 "max_connections_per_session": 2, 00:07:14.637 "max_queue_depth": 64, 00:07:14.637 "default_time2wait": 2, 00:07:14.637 "default_time2retain": 20, 00:07:14.637 "first_burst_length": 8192, 00:07:14.637 "immediate_data": true, 00:07:14.637 "allow_duplicated_isid": false, 00:07:14.637 "error_recovery_level": 0, 00:07:14.637 "nop_timeout": 60, 00:07:14.637 "nop_in_interval": 30, 00:07:14.637 "disable_chap": false, 00:07:14.637 "require_chap": false, 00:07:14.637 "mutual_chap": false, 00:07:14.637 "chap_group": 0, 00:07:14.637 "max_large_datain_per_connection": 64, 00:07:14.637 "max_r2t_per_connection": 4, 00:07:14.637 "pdu_pool_size": 36864, 00:07:14.637 "immediate_data_pool_size": 16384, 00:07:14.637 "data_out_pool_size": 2048 00:07:14.637 } 00:07:14.637 } 00:07:14.637 ] 00:07:14.637 } 00:07:14.637 ] 00:07:14.637 } 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 701719 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 701719 ']' 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 701719 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701719 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701719' 00:07:14.637 killing process with pid 701719 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 701719 00:07:14.637 08:43:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 701719 00:07:14.895 08:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=701876 00:07:14.895 08:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:14.895 08:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 701876 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 701876 ']' 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 701876 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 701876 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 701876' 00:07:20.160 killing process with pid 701876 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 701876 00:07:20.160 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 701876 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:20.420 00:07:20.420 real 0m6.521s 00:07:20.420 user 0m6.153s 00:07:20.420 sys 0m0.672s 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:20.420 ************************************ 00:07:20.420 END TEST skip_rpc_with_json 00:07:20.420 ************************************ 00:07:20.420 08:43:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:20.420 08:43:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.420 08:43:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.420 08:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.420 ************************************ 00:07:20.420 START TEST skip_rpc_with_delay 00:07:20.420 ************************************ 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:20.420 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:20.420 [2024-11-06 08:43:33.696941] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.679 00:07:20.679 real 0m0.071s 00:07:20.679 user 0m0.045s 00:07:20.679 sys 0m0.026s 00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.679 08:43:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:20.679 ************************************ 00:07:20.679 END TEST skip_rpc_with_delay 00:07:20.679 ************************************ 00:07:20.679 08:43:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:20.679 08:43:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:20.679 08:43:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:20.679 08:43:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.679 08:43:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.679 08:43:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.679 ************************************ 00:07:20.679 START TEST exit_on_failed_rpc_init 00:07:20.679 ************************************ 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=702588 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 702588 
00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 702588 ']' 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.679 08:43:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:20.679 [2024-11-06 08:43:33.816060] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:20.679 [2024-11-06 08:43:33.816157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702588 ] 00:07:20.679 [2024-11-06 08:43:33.881438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.679 [2024-11-06 08:43:33.940838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.937 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.937 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:20.937 08:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.938 
08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:20.938 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:21.196 [2024-11-06 08:43:34.261370] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:07:21.196 [2024-11-06 08:43:34.261442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702687 ] 00:07:21.196 [2024-11-06 08:43:34.325741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.196 [2024-11-06 08:43:34.385056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.196 [2024-11-06 08:43:34.385178] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:21.196 [2024-11-06 08:43:34.385197] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:21.196 [2024-11-06 08:43:34.385209] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 702588 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 702588 ']' 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 702588 00:07:21.196 08:43:34 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.196 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 702588 00:07:21.454 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.454 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.454 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 702588' 00:07:21.454 killing process with pid 702588 00:07:21.454 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 702588 00:07:21.454 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 702588 00:07:21.713 00:07:21.713 real 0m1.149s 00:07:21.713 user 0m1.285s 00:07:21.713 sys 0m0.404s 00:07:21.713 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.713 08:43:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:21.713 ************************************ 00:07:21.713 END TEST exit_on_failed_rpc_init 00:07:21.713 ************************************ 00:07:21.713 08:43:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:21.713 00:07:21.713 real 0m13.558s 00:07:21.713 user 0m12.851s 00:07:21.713 sys 0m1.584s 00:07:21.713 08:43:34 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.713 08:43:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.713 ************************************ 00:07:21.713 END TEST skip_rpc 00:07:21.713 ************************************ 00:07:21.713 08:43:34 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.713 08:43:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.713 08:43:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.713 08:43:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.713 ************************************ 00:07:21.713 START TEST rpc_client 00:07:21.713 ************************************ 00:07:21.713 08:43:34 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:21.971 * Looking for test storage... 00:07:21.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:21.971 08:43:35 rpc_client -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:21.971 08:43:35 rpc_client -- common/autotest_common.sh@1689 -- # lcov --version 00:07:21.971 08:43:35 rpc_client -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:21.971 08:43:35 rpc_client -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.971 08:43:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.972 --rc genhtml_branch_coverage=1 00:07:21.972 --rc genhtml_function_coverage=1 00:07:21.972 --rc genhtml_legend=1 00:07:21.972 --rc geninfo_all_blocks=1 00:07:21.972 --rc geninfo_unexecuted_blocks=1 00:07:21.972 00:07:21.972 ' 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.972 --rc genhtml_branch_coverage=1 
00:07:21.972 --rc genhtml_function_coverage=1 00:07:21.972 --rc genhtml_legend=1 00:07:21.972 --rc geninfo_all_blocks=1 00:07:21.972 --rc geninfo_unexecuted_blocks=1 00:07:21.972 00:07:21.972 ' 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.972 --rc genhtml_branch_coverage=1 00:07:21.972 --rc genhtml_function_coverage=1 00:07:21.972 --rc genhtml_legend=1 00:07:21.972 --rc geninfo_all_blocks=1 00:07:21.972 --rc geninfo_unexecuted_blocks=1 00:07:21.972 00:07:21.972 ' 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.972 --rc genhtml_branch_coverage=1 00:07:21.972 --rc genhtml_function_coverage=1 00:07:21.972 --rc genhtml_legend=1 00:07:21.972 --rc geninfo_all_blocks=1 00:07:21.972 --rc geninfo_unexecuted_blocks=1 00:07:21.972 00:07:21.972 ' 00:07:21.972 08:43:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:21.972 OK 00:07:21.972 08:43:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:21.972 00:07:21.972 real 0m0.159s 00:07:21.972 user 0m0.108s 00:07:21.972 sys 0m0.061s 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.972 08:43:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:21.972 ************************************ 00:07:21.972 END TEST rpc_client 00:07:21.972 ************************************ 00:07:21.972 08:43:35 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.972 08:43:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.972 08:43:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.972 08:43:35 -- common/autotest_common.sh@10 
-- # set +x 00:07:21.972 ************************************ 00:07:21.972 START TEST json_config 00:07:21.972 ************************************ 00:07:21.972 08:43:35 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:21.972 08:43:35 json_config -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:21.972 08:43:35 json_config -- common/autotest_common.sh@1689 -- # lcov --version 00:07:21.972 08:43:35 json_config -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:22.231 08:43:35 json_config -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.231 08:43:35 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.231 08:43:35 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.231 08:43:35 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.231 08:43:35 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.231 08:43:35 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.231 08:43:35 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:22.231 08:43:35 json_config -- scripts/common.sh@345 -- # : 1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.231 08:43:35 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.231 08:43:35 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@353 -- # local d=1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.231 08:43:35 json_config -- scripts/common.sh@355 -- # echo 1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.231 08:43:35 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@353 -- # local d=2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.231 08:43:35 json_config -- scripts/common.sh@355 -- # echo 2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.231 08:43:35 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.231 08:43:35 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.231 08:43:35 json_config -- scripts/common.sh@368 -- # return 0 00:07:22.231 08:43:35 json_config -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.231 08:43:35 json_config -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:22.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.231 --rc genhtml_branch_coverage=1 00:07:22.231 --rc genhtml_function_coverage=1 00:07:22.231 --rc genhtml_legend=1 00:07:22.231 --rc geninfo_all_blocks=1 00:07:22.231 --rc geninfo_unexecuted_blocks=1 00:07:22.231 00:07:22.231 ' 00:07:22.231 08:43:35 json_config -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:22.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.231 --rc genhtml_branch_coverage=1 00:07:22.231 --rc genhtml_function_coverage=1 00:07:22.231 --rc genhtml_legend=1 00:07:22.231 --rc geninfo_all_blocks=1 00:07:22.231 --rc geninfo_unexecuted_blocks=1 00:07:22.231 00:07:22.231 ' 00:07:22.231 08:43:35 json_config -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:22.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.231 --rc genhtml_branch_coverage=1 00:07:22.231 --rc genhtml_function_coverage=1 00:07:22.231 --rc genhtml_legend=1 00:07:22.231 --rc geninfo_all_blocks=1 00:07:22.231 --rc geninfo_unexecuted_blocks=1 00:07:22.231 00:07:22.231 ' 00:07:22.231 08:43:35 json_config -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:22.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.232 --rc genhtml_branch_coverage=1 00:07:22.232 --rc genhtml_function_coverage=1 00:07:22.232 --rc genhtml_legend=1 00:07:22.232 --rc geninfo_all_blocks=1 00:07:22.232 --rc geninfo_unexecuted_blocks=1 00:07:22.232 00:07:22.232 ' 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.232 08:43:35 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.232 08:43:35 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.232 08:43:35 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.232 08:43:35 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.232 08:43:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.232 08:43:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.232 08:43:35 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.232 08:43:35 json_config -- paths/export.sh@5 -- # export PATH 00:07:22.232 08:43:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@51 -- # : 0 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.232 08:43:35 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:22.232 INFO: JSON configuration test init 00:07:22.232 08:43:35 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 08:43:35 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:22.232 08:43:35 json_config -- json_config/common.sh@9 -- # local app=target 00:07:22.232 08:43:35 json_config -- json_config/common.sh@10 -- # shift 00:07:22.232 08:43:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:22.232 08:43:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:22.232 08:43:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:22.232 08:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:22.232 08:43:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:22.232 08:43:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=702968 00:07:22.232 08:43:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:22.232 08:43:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:22.232 Waiting for target to run... 
00:07:22.232 08:43:35 json_config -- json_config/common.sh@25 -- # waitforlisten 702968 /var/tmp/spdk_tgt.sock 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@831 -- # '[' -z 702968 ']' 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:22.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.232 08:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.232 [2024-11-06 08:43:35.400739] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:22.232 [2024-11-06 08:43:35.400858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702968 ] 00:07:22.798 [2024-11-06 08:43:35.937739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.798 [2024-11-06 08:43:35.991194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:23.364 08:43:36 json_config -- json_config/common.sh@26 -- # echo '' 00:07:23.364 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@724 
-- # xtrace_disable 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.364 08:43:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:23.364 08:43:36 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:23.364 08:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:26.649 08:43:39 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@54 -- # sort 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:26.649 08:43:39 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.649 08:43:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:26.649 08:43:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:26.649 08:43:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:26.907 MallocForNvmf0 00:07:26.907 08:43:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:26.907 08:43:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:27.165 MallocForNvmf1 00:07:27.165 08:43:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:27.165 08:43:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:27.423 [2024-11-06 08:43:40.696487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.681 08:43:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.681 08:43:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.938 08:43:41 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:27.938 08:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:28.195 08:43:41 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:28.195 08:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:28.452 08:43:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:28.452 08:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:28.709 [2024-11-06 08:43:41.771838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:28.709 08:43:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:28.709 08:43:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.709 08:43:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.709 08:43:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:28.709 08:43:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.709 08:43:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.709 08:43:41 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:07:28.709 08:43:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:28.709 08:43:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:28.967 MallocBdevForConfigChangeCheck 00:07:28.967 08:43:42 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:28.967 08:43:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:28.967 08:43:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:28.967 08:43:42 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:28.967 08:43:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:29.533 08:43:42 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:29.533 INFO: shutting down applications... 
00:07:29.533 08:43:42 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:29.533 08:43:42 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:29.533 08:43:42 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:29.533 08:43:42 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:30.908 Calling clear_iscsi_subsystem 00:07:30.908 Calling clear_nvmf_subsystem 00:07:30.908 Calling clear_nbd_subsystem 00:07:30.908 Calling clear_ublk_subsystem 00:07:30.908 Calling clear_vhost_blk_subsystem 00:07:30.908 Calling clear_vhost_scsi_subsystem 00:07:30.908 Calling clear_bdev_subsystem 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:30.908 08:43:44 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:31.475 08:43:44 json_config -- json_config/json_config.sh@352 -- # break 00:07:31.475 08:43:44 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:31.475 08:43:44 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:31.475 08:43:44 json_config -- 
json_config/common.sh@31 -- # local app=target 00:07:31.475 08:43:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:31.475 08:43:44 json_config -- json_config/common.sh@35 -- # [[ -n 702968 ]] 00:07:31.475 08:43:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 702968 00:07:31.475 08:43:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:31.475 08:43:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:31.475 08:43:44 json_config -- json_config/common.sh@41 -- # kill -0 702968 00:07:31.475 08:43:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:32.043 08:43:45 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:32.043 08:43:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:32.043 08:43:45 json_config -- json_config/common.sh@41 -- # kill -0 702968 00:07:32.043 08:43:45 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:32.043 08:43:45 json_config -- json_config/common.sh@43 -- # break 00:07:32.043 08:43:45 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:32.043 08:43:45 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:32.043 SPDK target shutdown done 00:07:32.043 08:43:45 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:32.043 INFO: relaunching applications... 
00:07:32.043 08:43:45 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:32.043 08:43:45 json_config -- json_config/common.sh@9 -- # local app=target 00:07:32.043 08:43:45 json_config -- json_config/common.sh@10 -- # shift 00:07:32.043 08:43:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:32.043 08:43:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:32.043 08:43:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:32.043 08:43:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.043 08:43:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:32.043 08:43:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=704187 00:07:32.043 08:43:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:32.043 08:43:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:32.043 Waiting for target to run... 00:07:32.043 08:43:45 json_config -- json_config/common.sh@25 -- # waitforlisten 704187 /var/tmp/spdk_tgt.sock 00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@831 -- # '[' -z 704187 ']' 00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:32.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.043 08:43:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:32.043 [2024-11-06 08:43:45.155187] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:32.043 [2024-11-06 08:43:45.155308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704187 ] 00:07:32.613 [2024-11-06 08:43:45.718229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.613 [2024-11-06 08:43:45.770215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.920 [2024-11-06 08:43:48.820498] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.920 [2024-11-06 08:43:48.852978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:35.920 08:43:48 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.920 08:43:48 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:35.920 08:43:48 json_config -- json_config/common.sh@26 -- # echo '' 00:07:35.920 00:07:35.920 08:43:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:35.920 08:43:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:35.920 INFO: Checking if target configuration is the same... 
00:07:35.920 08:43:48 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:35.920 08:43:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:35.920 08:43:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:35.920 + '[' 2 -ne 2 ']' 00:07:35.920 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:35.920 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:35.920 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:35.920 +++ basename /dev/fd/62 00:07:35.920 ++ mktemp /tmp/62.XXX 00:07:35.920 + tmp_file_1=/tmp/62.sqT 00:07:35.920 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:35.920 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:35.920 + tmp_file_2=/tmp/spdk_tgt_config.json.71s 00:07:35.920 + ret=0 00:07:35.920 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:36.177 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:36.177 + diff -u /tmp/62.sqT /tmp/spdk_tgt_config.json.71s 00:07:36.177 + echo 'INFO: JSON config files are the same' 00:07:36.177 INFO: JSON config files are the same 00:07:36.177 + rm /tmp/62.sqT /tmp/spdk_tgt_config.json.71s 00:07:36.177 + exit 0 00:07:36.177 08:43:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:36.177 08:43:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:36.177 INFO: changing configuration and checking if this can be detected... 
00:07:36.177 08:43:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.177 08:43:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:36.434 08:43:49 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:36.434 08:43:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:36.434 08:43:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:36.434 + '[' 2 -ne 2 ']' 00:07:36.434 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:36.434 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:36.434 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:36.434 +++ basename /dev/fd/62 00:07:36.434 ++ mktemp /tmp/62.XXX 00:07:36.434 + tmp_file_1=/tmp/62.zvq 00:07:36.434 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:36.434 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:36.434 + tmp_file_2=/tmp/spdk_tgt_config.json.2uE 00:07:36.434 + ret=0 00:07:36.434 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:36.999 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:36.999 + diff -u /tmp/62.zvq /tmp/spdk_tgt_config.json.2uE 00:07:36.999 + ret=1 00:07:36.999 + echo '=== Start of file: /tmp/62.zvq ===' 00:07:36.999 + cat /tmp/62.zvq 00:07:36.999 + echo '=== End of file: /tmp/62.zvq ===' 00:07:36.999 + echo '' 00:07:36.999 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2uE ===' 00:07:36.999 + cat /tmp/spdk_tgt_config.json.2uE 00:07:36.999 + echo '=== End of file: /tmp/spdk_tgt_config.json.2uE ===' 00:07:36.999 + echo '' 00:07:36.999 + rm /tmp/62.zvq /tmp/spdk_tgt_config.json.2uE 00:07:36.999 + exit 1 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:36.999 INFO: configuration change detected. 
00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 704187 ]] 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.999 08:43:50 json_config -- json_config/json_config.sh@330 -- # killprocess 704187 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@950 -- # '[' -z 704187 ']' 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@954 -- # kill -0 704187 
00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@955 -- # uname 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 704187 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 704187' 00:07:36.999 killing process with pid 704187 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@969 -- # kill 704187 00:07:36.999 08:43:50 json_config -- common/autotest_common.sh@974 -- # wait 704187 00:07:38.899 08:43:51 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:38.899 08:43:51 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:38.899 08:43:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.900 08:43:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.900 08:43:51 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:38.900 08:43:51 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:38.900 INFO: Success 00:07:38.900 00:07:38.900 real 0m16.506s 00:07:38.900 user 0m17.899s 00:07:38.900 sys 0m2.876s 00:07:38.900 08:43:51 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.900 08:43:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.900 ************************************ 00:07:38.900 END TEST json_config 00:07:38.900 ************************************ 00:07:38.900 08:43:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:38.900 08:43:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.900 08:43:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.900 08:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.900 ************************************ 00:07:38.900 START TEST json_config_extra_key 00:07:38.900 ************************************ 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1689 -- # lcov --version 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:38.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.900 --rc genhtml_branch_coverage=1 00:07:38.900 --rc genhtml_function_coverage=1 00:07:38.900 --rc genhtml_legend=1 00:07:38.900 --rc geninfo_all_blocks=1 
00:07:38.900 --rc geninfo_unexecuted_blocks=1 00:07:38.900 00:07:38.900 ' 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:38.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.900 --rc genhtml_branch_coverage=1 00:07:38.900 --rc genhtml_function_coverage=1 00:07:38.900 --rc genhtml_legend=1 00:07:38.900 --rc geninfo_all_blocks=1 00:07:38.900 --rc geninfo_unexecuted_blocks=1 00:07:38.900 00:07:38.900 ' 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:38.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.900 --rc genhtml_branch_coverage=1 00:07:38.900 --rc genhtml_function_coverage=1 00:07:38.900 --rc genhtml_legend=1 00:07:38.900 --rc geninfo_all_blocks=1 00:07:38.900 --rc geninfo_unexecuted_blocks=1 00:07:38.900 00:07:38.900 ' 00:07:38.900 08:43:51 json_config_extra_key -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:38.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.900 --rc genhtml_branch_coverage=1 00:07:38.900 --rc genhtml_function_coverage=1 00:07:38.900 --rc genhtml_legend=1 00:07:38.900 --rc geninfo_all_blocks=1 00:07:38.900 --rc geninfo_unexecuted_blocks=1 00:07:38.900 00:07:38.900 ' 00:07:38.900 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.900 08:43:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.900 08:43:51 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.900 08:43:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.900 08:43:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.900 08:43:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:38.900 08:43:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:38.900 08:43:51 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.900 08:43:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.901 08:43:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:38.901 INFO: launching applications... 00:07:38.901 08:43:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=705102 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:38.901 Waiting for target to run... 
00:07:38.901 08:43:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 705102 /var/tmp/spdk_tgt.sock 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 705102 ']' 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:38.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.901 08:43:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:38.901 [2024-11-06 08:43:51.949187] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:38.901 [2024-11-06 08:43:51.949267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705102 ] 00:07:39.159 [2024-11-06 08:43:52.287316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.159 [2024-11-06 08:43:52.329411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.724 08:43:52 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:39.724 08:43:52 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:39.724 00:07:39.724 08:43:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:39.724 INFO: shutting down applications... 00:07:39.724 08:43:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 705102 ]] 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 705102 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 705102 00:07:39.724 08:43:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 705102 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:40.289 08:43:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:40.289 SPDK target shutdown done 00:07:40.289 08:43:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:40.289 Success 00:07:40.289 00:07:40.289 real 0m1.678s 00:07:40.289 user 0m1.693s 00:07:40.289 sys 0m0.428s 00:07:40.289 08:43:53 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.289 08:43:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:07:40.289 ************************************ 00:07:40.289 END TEST json_config_extra_key 00:07:40.289 ************************************ 00:07:40.289 08:43:53 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:40.289 08:43:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:40.289 08:43:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.289 08:43:53 -- common/autotest_common.sh@10 -- # set +x 00:07:40.289 ************************************ 00:07:40.289 START TEST alias_rpc 00:07:40.289 ************************************ 00:07:40.289 08:43:53 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:40.289 * Looking for test storage... 00:07:40.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:40.289 08:43:53 alias_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:40.289 08:43:53 alias_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:07:40.289 08:43:53 alias_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.548 08:43:53 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:40.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.548 --rc genhtml_branch_coverage=1 00:07:40.548 --rc genhtml_function_coverage=1 00:07:40.548 --rc genhtml_legend=1 00:07:40.548 --rc geninfo_all_blocks=1 00:07:40.548 --rc geninfo_unexecuted_blocks=1 00:07:40.548 00:07:40.548 ' 
00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:40.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.548 --rc genhtml_branch_coverage=1 00:07:40.548 --rc genhtml_function_coverage=1 00:07:40.548 --rc genhtml_legend=1 00:07:40.548 --rc geninfo_all_blocks=1 00:07:40.548 --rc geninfo_unexecuted_blocks=1 00:07:40.548 00:07:40.548 ' 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:40.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.548 --rc genhtml_branch_coverage=1 00:07:40.548 --rc genhtml_function_coverage=1 00:07:40.548 --rc genhtml_legend=1 00:07:40.548 --rc geninfo_all_blocks=1 00:07:40.548 --rc geninfo_unexecuted_blocks=1 00:07:40.548 00:07:40.548 ' 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:40.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.548 --rc genhtml_branch_coverage=1 00:07:40.548 --rc genhtml_function_coverage=1 00:07:40.548 --rc genhtml_legend=1 00:07:40.548 --rc geninfo_all_blocks=1 00:07:40.548 --rc geninfo_unexecuted_blocks=1 00:07:40.548 00:07:40.548 ' 00:07:40.548 08:43:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:40.548 08:43:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=705415 00:07:40.548 08:43:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:40.548 08:43:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 705415 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 705415 ']' 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.548 08:43:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.548 [2024-11-06 08:43:53.690274] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:40.548 [2024-11-06 08:43:53.690353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705415 ] 00:07:40.548 [2024-11-06 08:43:53.758523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.548 [2024-11-06 08:43:53.816066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.807 08:43:54 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.807 08:43:54 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.807 08:43:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:41.371 08:43:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 705415 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 705415 ']' 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 705415 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705415 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:41.371 08:43:54 alias_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 705415' 00:07:41.371 killing process with pid 705415 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@969 -- # kill 705415 00:07:41.371 08:43:54 alias_rpc -- common/autotest_common.sh@974 -- # wait 705415 00:07:41.629 00:07:41.629 real 0m1.358s 00:07:41.629 user 0m1.481s 00:07:41.629 sys 0m0.441s 00:07:41.629 08:43:54 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.629 08:43:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.629 ************************************ 00:07:41.629 END TEST alias_rpc 00:07:41.629 ************************************ 00:07:41.629 08:43:54 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:41.629 08:43:54 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:41.629 08:43:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.629 08:43:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.629 08:43:54 -- common/autotest_common.sh@10 -- # set +x 00:07:41.629 ************************************ 00:07:41.629 START TEST spdkcli_tcp 00:07:41.629 ************************************ 00:07:41.629 08:43:54 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:41.888 * Looking for test storage... 
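The trace above ends each test by tearing down the target with the killprocess pattern: pgrep for the pid, `kill -0` to confirm the process still exists, `ps -o comm=` to make sure it is not the sudo wrapper, then kill and wait. A minimal standalone sketch of that pattern — the helper name `killprocess_sketch` is illustrative, not the exact autotest_common.sh code, and it assumes a Linux procps `ps`:

```shell
#!/bin/sh
# Sketch of the killprocess teardown traced above: validate the pid,
# refuse to kill a sudo wrapper, then terminate and reap the process.
killprocess_sketch() {
    pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 1    # process must still exist
    name=$(ps --no-headers -o comm= "$pid")   # resolve the command name
    [ "$name" = sudo ] && return 1            # never kill the sudo wrapper itself
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it so no zombie remains
    return 0
}

sleep 30 &                                    # throwaway background process
bgpid=$!
killprocess_sketch "$bgpid" && echo "killed"
```

The `wait` at the end is what lets the autotest script block until the target has actually exited before the next test starts its own spdk_tgt on the same socket.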
00:07:41.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:41.888 08:43:54 spdkcli_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:41.888 08:43:54 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:07:41.888 08:43:54 spdkcli_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:41.888 08:43:55 spdkcli_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.888 08:43:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.889 --rc genhtml_branch_coverage=1 00:07:41.889 --rc genhtml_function_coverage=1 00:07:41.889 --rc genhtml_legend=1 00:07:41.889 --rc geninfo_all_blocks=1 00:07:41.889 --rc geninfo_unexecuted_blocks=1 00:07:41.889 00:07:41.889 ' 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.889 --rc genhtml_branch_coverage=1 00:07:41.889 --rc genhtml_function_coverage=1 00:07:41.889 --rc genhtml_legend=1 00:07:41.889 --rc geninfo_all_blocks=1 00:07:41.889 --rc geninfo_unexecuted_blocks=1 00:07:41.889 00:07:41.889 ' 00:07:41.889 08:43:55 spdkcli_tcp -- 
common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.889 --rc genhtml_branch_coverage=1 00:07:41.889 --rc genhtml_function_coverage=1 00:07:41.889 --rc genhtml_legend=1 00:07:41.889 --rc geninfo_all_blocks=1 00:07:41.889 --rc geninfo_unexecuted_blocks=1 00:07:41.889 00:07:41.889 ' 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:41.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.889 --rc genhtml_branch_coverage=1 00:07:41.889 --rc genhtml_function_coverage=1 00:07:41.889 --rc genhtml_legend=1 00:07:41.889 --rc geninfo_all_blocks=1 00:07:41.889 --rc geninfo_unexecuted_blocks=1 00:07:41.889 00:07:41.889 ' 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=705615 00:07:41.889 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:41.889 08:43:55 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 705615 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 705615 ']' 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.889 08:43:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.889 [2024-11-06 08:43:55.100790] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:41.889 [2024-11-06 08:43:55.100892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705615 ] 00:07:41.889 [2024-11-06 08:43:55.164727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.147 [2024-11-06 08:43:55.225854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.147 [2024-11-06 08:43:55.225863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.405 08:43:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.405 08:43:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:42.405 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=705638 00:07:42.405 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:42.405 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:07:42.664 [ 00:07:42.664 "bdev_malloc_delete", 00:07:42.664 "bdev_malloc_create", 00:07:42.664 "bdev_null_resize", 00:07:42.664 "bdev_null_delete", 00:07:42.664 "bdev_null_create", 00:07:42.664 "bdev_nvme_cuse_unregister", 00:07:42.664 "bdev_nvme_cuse_register", 00:07:42.664 "bdev_opal_new_user", 00:07:42.664 "bdev_opal_set_lock_state", 00:07:42.664 "bdev_opal_delete", 00:07:42.664 "bdev_opal_get_info", 00:07:42.664 "bdev_opal_create", 00:07:42.664 "bdev_nvme_opal_revert", 00:07:42.664 "bdev_nvme_opal_init", 00:07:42.664 "bdev_nvme_send_cmd", 00:07:42.664 "bdev_nvme_set_keys", 00:07:42.664 "bdev_nvme_get_path_iostat", 00:07:42.664 "bdev_nvme_get_mdns_discovery_info", 00:07:42.664 "bdev_nvme_stop_mdns_discovery", 00:07:42.664 "bdev_nvme_start_mdns_discovery", 00:07:42.664 "bdev_nvme_set_multipath_policy", 00:07:42.664 "bdev_nvme_set_preferred_path", 00:07:42.664 "bdev_nvme_get_io_paths", 00:07:42.664 "bdev_nvme_remove_error_injection", 00:07:42.664 "bdev_nvme_add_error_injection", 00:07:42.664 "bdev_nvme_get_discovery_info", 00:07:42.664 "bdev_nvme_stop_discovery", 00:07:42.664 "bdev_nvme_start_discovery", 00:07:42.664 "bdev_nvme_get_controller_health_info", 00:07:42.664 "bdev_nvme_disable_controller", 00:07:42.664 "bdev_nvme_enable_controller", 00:07:42.664 "bdev_nvme_reset_controller", 00:07:42.664 "bdev_nvme_get_transport_statistics", 00:07:42.664 "bdev_nvme_apply_firmware", 00:07:42.664 "bdev_nvme_detach_controller", 00:07:42.664 "bdev_nvme_get_controllers", 00:07:42.664 "bdev_nvme_attach_controller", 00:07:42.664 "bdev_nvme_set_hotplug", 00:07:42.664 "bdev_nvme_set_options", 00:07:42.664 "bdev_passthru_delete", 00:07:42.664 "bdev_passthru_create", 00:07:42.664 "bdev_lvol_set_parent_bdev", 00:07:42.664 "bdev_lvol_set_parent", 00:07:42.664 "bdev_lvol_check_shallow_copy", 00:07:42.664 "bdev_lvol_start_shallow_copy", 00:07:42.664 "bdev_lvol_grow_lvstore", 00:07:42.664 "bdev_lvol_get_lvols", 00:07:42.664 "bdev_lvol_get_lvstores", 
00:07:42.664 "bdev_lvol_delete", 00:07:42.664 "bdev_lvol_set_read_only", 00:07:42.664 "bdev_lvol_resize", 00:07:42.664 "bdev_lvol_decouple_parent", 00:07:42.664 "bdev_lvol_inflate", 00:07:42.664 "bdev_lvol_rename", 00:07:42.664 "bdev_lvol_clone_bdev", 00:07:42.664 "bdev_lvol_clone", 00:07:42.664 "bdev_lvol_snapshot", 00:07:42.664 "bdev_lvol_create", 00:07:42.664 "bdev_lvol_delete_lvstore", 00:07:42.664 "bdev_lvol_rename_lvstore", 00:07:42.664 "bdev_lvol_create_lvstore", 00:07:42.664 "bdev_raid_set_options", 00:07:42.664 "bdev_raid_remove_base_bdev", 00:07:42.664 "bdev_raid_add_base_bdev", 00:07:42.664 "bdev_raid_delete", 00:07:42.664 "bdev_raid_create", 00:07:42.664 "bdev_raid_get_bdevs", 00:07:42.664 "bdev_error_inject_error", 00:07:42.664 "bdev_error_delete", 00:07:42.664 "bdev_error_create", 00:07:42.664 "bdev_split_delete", 00:07:42.664 "bdev_split_create", 00:07:42.664 "bdev_delay_delete", 00:07:42.664 "bdev_delay_create", 00:07:42.664 "bdev_delay_update_latency", 00:07:42.664 "bdev_zone_block_delete", 00:07:42.664 "bdev_zone_block_create", 00:07:42.664 "blobfs_create", 00:07:42.664 "blobfs_detect", 00:07:42.664 "blobfs_set_cache_size", 00:07:42.664 "bdev_aio_delete", 00:07:42.664 "bdev_aio_rescan", 00:07:42.664 "bdev_aio_create", 00:07:42.664 "bdev_ftl_set_property", 00:07:42.664 "bdev_ftl_get_properties", 00:07:42.664 "bdev_ftl_get_stats", 00:07:42.664 "bdev_ftl_unmap", 00:07:42.664 "bdev_ftl_unload", 00:07:42.664 "bdev_ftl_delete", 00:07:42.664 "bdev_ftl_load", 00:07:42.664 "bdev_ftl_create", 00:07:42.664 "bdev_virtio_attach_controller", 00:07:42.664 "bdev_virtio_scsi_get_devices", 00:07:42.664 "bdev_virtio_detach_controller", 00:07:42.664 "bdev_virtio_blk_set_hotplug", 00:07:42.664 "bdev_iscsi_delete", 00:07:42.664 "bdev_iscsi_create", 00:07:42.664 "bdev_iscsi_set_options", 00:07:42.665 "accel_error_inject_error", 00:07:42.665 "ioat_scan_accel_module", 00:07:42.665 "dsa_scan_accel_module", 00:07:42.665 "iaa_scan_accel_module", 00:07:42.665 
"vfu_virtio_create_fs_endpoint", 00:07:42.665 "vfu_virtio_create_scsi_endpoint", 00:07:42.665 "vfu_virtio_scsi_remove_target", 00:07:42.665 "vfu_virtio_scsi_add_target", 00:07:42.665 "vfu_virtio_create_blk_endpoint", 00:07:42.665 "vfu_virtio_delete_endpoint", 00:07:42.665 "keyring_file_remove_key", 00:07:42.665 "keyring_file_add_key", 00:07:42.665 "keyring_linux_set_options", 00:07:42.665 "fsdev_aio_delete", 00:07:42.665 "fsdev_aio_create", 00:07:42.665 "iscsi_get_histogram", 00:07:42.665 "iscsi_enable_histogram", 00:07:42.665 "iscsi_set_options", 00:07:42.665 "iscsi_get_auth_groups", 00:07:42.665 "iscsi_auth_group_remove_secret", 00:07:42.665 "iscsi_auth_group_add_secret", 00:07:42.665 "iscsi_delete_auth_group", 00:07:42.665 "iscsi_create_auth_group", 00:07:42.665 "iscsi_set_discovery_auth", 00:07:42.665 "iscsi_get_options", 00:07:42.665 "iscsi_target_node_request_logout", 00:07:42.665 "iscsi_target_node_set_redirect", 00:07:42.665 "iscsi_target_node_set_auth", 00:07:42.665 "iscsi_target_node_add_lun", 00:07:42.665 "iscsi_get_stats", 00:07:42.665 "iscsi_get_connections", 00:07:42.665 "iscsi_portal_group_set_auth", 00:07:42.665 "iscsi_start_portal_group", 00:07:42.665 "iscsi_delete_portal_group", 00:07:42.665 "iscsi_create_portal_group", 00:07:42.665 "iscsi_get_portal_groups", 00:07:42.665 "iscsi_delete_target_node", 00:07:42.665 "iscsi_target_node_remove_pg_ig_maps", 00:07:42.665 "iscsi_target_node_add_pg_ig_maps", 00:07:42.665 "iscsi_create_target_node", 00:07:42.665 "iscsi_get_target_nodes", 00:07:42.665 "iscsi_delete_initiator_group", 00:07:42.665 "iscsi_initiator_group_remove_initiators", 00:07:42.665 "iscsi_initiator_group_add_initiators", 00:07:42.665 "iscsi_create_initiator_group", 00:07:42.665 "iscsi_get_initiator_groups", 00:07:42.665 "nvmf_set_crdt", 00:07:42.665 "nvmf_set_config", 00:07:42.665 "nvmf_set_max_subsystems", 00:07:42.665 "nvmf_stop_mdns_prr", 00:07:42.665 "nvmf_publish_mdns_prr", 00:07:42.665 "nvmf_subsystem_get_listeners", 00:07:42.665 
"nvmf_subsystem_get_qpairs", 00:07:42.665 "nvmf_subsystem_get_controllers", 00:07:42.665 "nvmf_get_stats", 00:07:42.665 "nvmf_get_transports", 00:07:42.665 "nvmf_create_transport", 00:07:42.665 "nvmf_get_targets", 00:07:42.665 "nvmf_delete_target", 00:07:42.665 "nvmf_create_target", 00:07:42.665 "nvmf_subsystem_allow_any_host", 00:07:42.665 "nvmf_subsystem_set_keys", 00:07:42.665 "nvmf_subsystem_remove_host", 00:07:42.665 "nvmf_subsystem_add_host", 00:07:42.665 "nvmf_ns_remove_host", 00:07:42.665 "nvmf_ns_add_host", 00:07:42.665 "nvmf_subsystem_remove_ns", 00:07:42.665 "nvmf_subsystem_set_ns_ana_group", 00:07:42.665 "nvmf_subsystem_add_ns", 00:07:42.665 "nvmf_subsystem_listener_set_ana_state", 00:07:42.665 "nvmf_discovery_get_referrals", 00:07:42.665 "nvmf_discovery_remove_referral", 00:07:42.665 "nvmf_discovery_add_referral", 00:07:42.665 "nvmf_subsystem_remove_listener", 00:07:42.665 "nvmf_subsystem_add_listener", 00:07:42.665 "nvmf_delete_subsystem", 00:07:42.665 "nvmf_create_subsystem", 00:07:42.665 "nvmf_get_subsystems", 00:07:42.665 "env_dpdk_get_mem_stats", 00:07:42.665 "nbd_get_disks", 00:07:42.665 "nbd_stop_disk", 00:07:42.665 "nbd_start_disk", 00:07:42.665 "ublk_recover_disk", 00:07:42.665 "ublk_get_disks", 00:07:42.665 "ublk_stop_disk", 00:07:42.665 "ublk_start_disk", 00:07:42.665 "ublk_destroy_target", 00:07:42.665 "ublk_create_target", 00:07:42.665 "virtio_blk_create_transport", 00:07:42.665 "virtio_blk_get_transports", 00:07:42.665 "vhost_controller_set_coalescing", 00:07:42.665 "vhost_get_controllers", 00:07:42.665 "vhost_delete_controller", 00:07:42.665 "vhost_create_blk_controller", 00:07:42.665 "vhost_scsi_controller_remove_target", 00:07:42.665 "vhost_scsi_controller_add_target", 00:07:42.665 "vhost_start_scsi_controller", 00:07:42.665 "vhost_create_scsi_controller", 00:07:42.665 "thread_set_cpumask", 00:07:42.665 "scheduler_set_options", 00:07:42.665 "framework_get_governor", 00:07:42.665 "framework_get_scheduler", 00:07:42.665 
"framework_set_scheduler", 00:07:42.665 "framework_get_reactors", 00:07:42.665 "thread_get_io_channels", 00:07:42.665 "thread_get_pollers", 00:07:42.665 "thread_get_stats", 00:07:42.665 "framework_monitor_context_switch", 00:07:42.665 "spdk_kill_instance", 00:07:42.665 "log_enable_timestamps", 00:07:42.665 "log_get_flags", 00:07:42.665 "log_clear_flag", 00:07:42.665 "log_set_flag", 00:07:42.665 "log_get_level", 00:07:42.665 "log_set_level", 00:07:42.665 "log_get_print_level", 00:07:42.665 "log_set_print_level", 00:07:42.665 "framework_enable_cpumask_locks", 00:07:42.665 "framework_disable_cpumask_locks", 00:07:42.665 "framework_wait_init", 00:07:42.665 "framework_start_init", 00:07:42.665 "scsi_get_devices", 00:07:42.665 "bdev_get_histogram", 00:07:42.665 "bdev_enable_histogram", 00:07:42.665 "bdev_set_qos_limit", 00:07:42.665 "bdev_set_qd_sampling_period", 00:07:42.665 "bdev_get_bdevs", 00:07:42.665 "bdev_reset_iostat", 00:07:42.665 "bdev_get_iostat", 00:07:42.665 "bdev_examine", 00:07:42.665 "bdev_wait_for_examine", 00:07:42.665 "bdev_set_options", 00:07:42.665 "accel_get_stats", 00:07:42.665 "accel_set_options", 00:07:42.665 "accel_set_driver", 00:07:42.665 "accel_crypto_key_destroy", 00:07:42.665 "accel_crypto_keys_get", 00:07:42.665 "accel_crypto_key_create", 00:07:42.665 "accel_assign_opc", 00:07:42.665 "accel_get_module_info", 00:07:42.665 "accel_get_opc_assignments", 00:07:42.665 "vmd_rescan", 00:07:42.665 "vmd_remove_device", 00:07:42.665 "vmd_enable", 00:07:42.665 "sock_get_default_impl", 00:07:42.665 "sock_set_default_impl", 00:07:42.665 "sock_impl_set_options", 00:07:42.665 "sock_impl_get_options", 00:07:42.665 "iobuf_get_stats", 00:07:42.665 "iobuf_set_options", 00:07:42.665 "keyring_get_keys", 00:07:42.665 "vfu_tgt_set_base_path", 00:07:42.665 "framework_get_pci_devices", 00:07:42.665 "framework_get_config", 00:07:42.665 "framework_get_subsystems", 00:07:42.665 "fsdev_set_opts", 00:07:42.665 "fsdev_get_opts", 00:07:42.665 "trace_get_info", 
00:07:42.665 "trace_get_tpoint_group_mask", 00:07:42.665 "trace_disable_tpoint_group", 00:07:42.665 "trace_enable_tpoint_group", 00:07:42.665 "trace_clear_tpoint_mask", 00:07:42.665 "trace_set_tpoint_mask", 00:07:42.665 "notify_get_notifications", 00:07:42.665 "notify_get_types", 00:07:42.665 "spdk_get_version", 00:07:42.665 "rpc_get_methods" 00:07:42.665 ] 00:07:42.665 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.665 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:42.665 08:43:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 705615 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 705615 ']' 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 705615 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705615 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705615' 00:07:42.665 killing process with pid 705615 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 705615 00:07:42.665 08:43:55 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 705615 00:07:43.232 00:07:43.232 real 0m1.361s 00:07:43.232 user 0m2.452s 00:07:43.232 sys 0m0.455s 00:07:43.232 08:43:56 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.232 08:43:56 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:07:43.232 ************************************ 00:07:43.232 END TEST spdkcli_tcp 00:07:43.232 ************************************ 00:07:43.232 08:43:56 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:43.233 08:43:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.233 08:43:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.233 08:43:56 -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 ************************************ 00:07:43.233 START TEST dpdk_mem_utility 00:07:43.233 ************************************ 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:43.233 * Looking for test storage... 00:07:43.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lcov --version 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.233 08:43:56 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.233 08:43:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:43.233 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.233 --rc genhtml_branch_coverage=1 00:07:43.233 --rc genhtml_function_coverage=1 00:07:43.233 --rc genhtml_legend=1 00:07:43.233 --rc geninfo_all_blocks=1 00:07:43.233 --rc geninfo_unexecuted_blocks=1 00:07:43.233 00:07:43.233 ' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.233 --rc genhtml_branch_coverage=1 00:07:43.233 --rc genhtml_function_coverage=1 00:07:43.233 --rc genhtml_legend=1 00:07:43.233 --rc geninfo_all_blocks=1 00:07:43.233 --rc geninfo_unexecuted_blocks=1 00:07:43.233 00:07:43.233 ' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.233 --rc genhtml_branch_coverage=1 00:07:43.233 --rc genhtml_function_coverage=1 00:07:43.233 --rc genhtml_legend=1 00:07:43.233 --rc geninfo_all_blocks=1 00:07:43.233 --rc geninfo_unexecuted_blocks=1 00:07:43.233 00:07:43.233 ' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:43.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.233 --rc genhtml_branch_coverage=1 00:07:43.233 --rc genhtml_function_coverage=1 00:07:43.233 --rc genhtml_legend=1 00:07:43.233 --rc geninfo_all_blocks=1 00:07:43.233 --rc geninfo_unexecuted_blocks=1 00:07:43.233 00:07:43.233 ' 00:07:43.233 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:43.233 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=705833 00:07:43.233 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:43.233 08:43:56 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 705833 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 705833 ']' 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.233 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:43.233 [2024-11-06 08:43:56.506932] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:43.233 [2024-11-06 08:43:56.507030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705833 ] 00:07:43.491 [2024-11-06 08:43:56.576652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.491 [2024-11-06 08:43:56.637118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.750 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.750 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:43.750 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:43.750 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:43.750 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.750 
08:43:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:43.750 { 00:07:43.750 "filename": "/tmp/spdk_mem_dump.txt" 00:07:43.750 } 00:07:43.750 08:43:56 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.750 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:43.750 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:43.750 1 heaps totaling size 818.000000 MiB 00:07:43.750 size: 818.000000 MiB heap id: 0 00:07:43.750 end heaps---------- 00:07:43.750 9 mempools totaling size 603.782043 MiB 00:07:43.750 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:43.750 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:43.750 size: 100.555481 MiB name: bdev_io_705833 00:07:43.750 size: 50.003479 MiB name: msgpool_705833 00:07:43.750 size: 36.509338 MiB name: fsdev_io_705833 00:07:43.750 size: 21.763794 MiB name: PDU_Pool 00:07:43.750 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:43.750 size: 4.133484 MiB name: evtpool_705833 00:07:43.750 size: 0.026123 MiB name: Session_Pool 00:07:43.750 end mempools------- 00:07:43.750 6 memzones totaling size 4.142822 MiB 00:07:43.750 size: 1.000366 MiB name: RG_ring_0_705833 00:07:43.750 size: 1.000366 MiB name: RG_ring_1_705833 00:07:43.750 size: 1.000366 MiB name: RG_ring_4_705833 00:07:43.750 size: 1.000366 MiB name: RG_ring_5_705833 00:07:43.750 size: 0.125366 MiB name: RG_ring_2_705833 00:07:43.750 size: 0.015991 MiB name: RG_ring_3_705833 00:07:43.750 end memzones------- 00:07:43.750 08:43:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:43.750 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:43.750 list of free elements. 
size: 10.852478 MiB 00:07:43.750 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:43.750 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:43.750 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:43.750 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:43.750 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:43.750 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:43.750 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:43.750 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:43.750 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:43.750 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:43.750 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:43.750 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:43.750 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:43.750 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:43.750 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:43.750 list of standard malloc elements. 
size: 199.218628 MiB 00:07:43.750 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:43.750 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:43.750 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:43.750 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:43.750 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:43.750 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:43.750 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:43.750 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:43.750 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:43.750 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:43.750 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:43.750 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:43.750 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:43.750 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:43.750 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:43.750 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:43.751 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:43.751 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:43.751 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:43.751 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:43.751 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:43.751 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:43.751 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:43.751 list of memzone associated elements. 
size: 607.928894 MiB 00:07:43.751 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:43.751 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:43.751 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:43.751 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:43.751 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:43.751 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_705833_0 00:07:43.751 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:43.751 associated memzone info: size: 48.002930 MiB name: MP_msgpool_705833_0 00:07:43.751 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:43.751 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_705833_0 00:07:43.751 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:43.751 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:43.751 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:43.751 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:43.751 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:43.751 associated memzone info: size: 3.000122 MiB name: MP_evtpool_705833_0 00:07:43.751 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:43.751 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_705833 00:07:43.751 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:43.751 associated memzone info: size: 1.007996 MiB name: MP_evtpool_705833 00:07:43.751 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:43.751 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:43.751 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:43.751 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:43.751 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:43.751 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:43.751 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:43.751 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:43.751 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:43.751 associated memzone info: size: 1.000366 MiB name: RG_ring_0_705833 00:07:43.751 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:43.751 associated memzone info: size: 1.000366 MiB name: RG_ring_1_705833 00:07:43.751 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:43.751 associated memzone info: size: 1.000366 MiB name: RG_ring_4_705833 00:07:43.751 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:43.751 associated memzone info: size: 1.000366 MiB name: RG_ring_5_705833 00:07:43.751 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:43.751 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_705833 00:07:43.751 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:43.751 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_705833 00:07:43.751 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:43.751 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:43.751 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:43.751 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:43.751 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:43.751 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:43.751 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:43.751 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_705833 00:07:43.751 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:43.751 associated memzone info: size: 0.125366 MiB name: RG_ring_2_705833 00:07:43.751 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:07:43.751 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:43.751 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:43.751 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:43.751 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:43.751 associated memzone info: size: 0.015991 MiB name: RG_ring_3_705833 00:07:43.751 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:43.751 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:43.751 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:43.751 associated memzone info: size: 0.000183 MiB name: MP_msgpool_705833 00:07:43.751 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:43.751 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_705833 00:07:43.751 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:43.751 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_705833 00:07:43.751 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:43.751 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:43.751 08:43:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:43.751 08:43:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 705833 00:07:43.751 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 705833 ']' 00:07:43.751 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 705833 00:07:43.751 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:07:43.751 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.751 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705833 00:07:44.009 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:44.009 08:43:57 
dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:44.009 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705833' 00:07:44.009 killing process with pid 705833 00:07:44.009 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 705833 00:07:44.009 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 705833 00:07:44.267 00:07:44.267 real 0m1.174s 00:07:44.267 user 0m1.171s 00:07:44.267 sys 0m0.411s 00:07:44.267 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.267 08:43:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:44.267 ************************************ 00:07:44.267 END TEST dpdk_mem_utility 00:07:44.267 ************************************ 00:07:44.267 08:43:57 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:44.267 08:43:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.267 08:43:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.267 08:43:57 -- common/autotest_common.sh@10 -- # set +x 00:07:44.267 ************************************ 00:07:44.267 START TEST event 00:07:44.267 ************************************ 00:07:44.267 08:43:57 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:44.525 * Looking for test storage... 
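The kill sequence traced just above (killprocess in autotest_common.sh) follows a common shell pattern: check that a pid was supplied, probe liveness with `kill -0`, send the real signal, then reap the process with `wait`. A minimal sketch of that flow — the function body is reconstructed for illustration and omits the uname/ps/sudo checks the real script performs:

```shell
# Minimal sketch of the killprocess flow traced above; the real
# autotest_common.sh version adds uname/ps checks and sudo handling.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # no pid supplied
    if kill -0 "$pid" 2>/dev/null; then  # is the process still alive?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true  # reap; ignore the SIGTERM exit status
    fi
    return 0
}

# Demonstrate against a short-lived background sleep.
sleep 60 &
killprocess $!
```

Note that `kill -0` delivers no signal at all; it only verifies the pid exists and is signalable, which is why the trace runs it before the actual `kill`.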
00:07:44.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1689 -- # lcov --version 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:44.525 08:43:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.525 08:43:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.525 08:43:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.525 08:43:57 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.525 08:43:57 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.525 08:43:57 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.525 08:43:57 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.525 08:43:57 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.525 08:43:57 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.525 08:43:57 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.525 08:43:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.525 08:43:57 event -- scripts/common.sh@344 -- # case "$op" in 00:07:44.525 08:43:57 event -- scripts/common.sh@345 -- # : 1 00:07:44.525 08:43:57 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.525 08:43:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.525 08:43:57 event -- scripts/common.sh@365 -- # decimal 1 00:07:44.525 08:43:57 event -- scripts/common.sh@353 -- # local d=1 00:07:44.525 08:43:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.525 08:43:57 event -- scripts/common.sh@355 -- # echo 1 00:07:44.525 08:43:57 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.525 08:43:57 event -- scripts/common.sh@366 -- # decimal 2 00:07:44.525 08:43:57 event -- scripts/common.sh@353 -- # local d=2 00:07:44.525 08:43:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.525 08:43:57 event -- scripts/common.sh@355 -- # echo 2 00:07:44.525 08:43:57 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.525 08:43:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.525 08:43:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.525 08:43:57 event -- scripts/common.sh@368 -- # return 0 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.525 --rc genhtml_branch_coverage=1 00:07:44.525 --rc genhtml_function_coverage=1 00:07:44.525 --rc genhtml_legend=1 00:07:44.525 --rc geninfo_all_blocks=1 00:07:44.525 --rc geninfo_unexecuted_blocks=1 00:07:44.525 00:07:44.525 ' 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.525 --rc genhtml_branch_coverage=1 00:07:44.525 --rc genhtml_function_coverage=1 00:07:44.525 --rc genhtml_legend=1 00:07:44.525 --rc geninfo_all_blocks=1 00:07:44.525 --rc geninfo_unexecuted_blocks=1 00:07:44.525 00:07:44.525 ' 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:44.525 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:44.525 --rc genhtml_branch_coverage=1 00:07:44.525 --rc genhtml_function_coverage=1 00:07:44.525 --rc genhtml_legend=1 00:07:44.525 --rc geninfo_all_blocks=1 00:07:44.525 --rc geninfo_unexecuted_blocks=1 00:07:44.525 00:07:44.525 ' 00:07:44.525 08:43:57 event -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:44.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.525 --rc genhtml_branch_coverage=1 00:07:44.525 --rc genhtml_function_coverage=1 00:07:44.525 --rc genhtml_legend=1 00:07:44.525 --rc geninfo_all_blocks=1 00:07:44.526 --rc geninfo_unexecuted_blocks=1 00:07:44.526 00:07:44.526 ' 00:07:44.526 08:43:57 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:44.526 08:43:57 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.526 08:43:57 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.526 08:43:57 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:07:44.526 08:43:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.526 08:43:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.526 ************************************ 00:07:44.526 START TEST event_perf 00:07:44.526 ************************************ 00:07:44.526 08:43:57 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:44.526 Running I/O for 1 seconds...[2024-11-06 08:43:57.703418] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:07:44.526 [2024-11-06 08:43:57.703468] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706032 ] 00:07:44.526 [2024-11-06 08:43:57.772001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.793 [2024-11-06 08:43:57.835068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.793 [2024-11-06 08:43:57.835088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.793 [2024-11-06 08:43:57.835145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.793 [2024-11-06 08:43:57.835149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.731 Running I/O for 1 seconds... 00:07:45.731 lcore 0: 224144 00:07:45.731 lcore 1: 224144 00:07:45.731 lcore 2: 224143 00:07:45.731 lcore 3: 224144 00:07:45.731 done. 
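The event_perf run above prints one events-processed counter per lcore for its 1-second window, so the aggregate event rate is simply the sum across reactors. A small sketch — the counter values are copied from the log output above, and the awk field index assumes the `lcore N: COUNT` layout shown there:

```shell
# Per-lcore counters as printed by event_perf in the run above.
results='lcore 0: 224144
lcore 1: 224144
lcore 2: 224143
lcore 3: 224144'

# Sum column 3 (the per-reactor event count) for the aggregate 1-second rate.
printf '%s\n' "$results" | awk '{sum += $3} END {print sum " events/sec total"}'
# prints: 896575 events/sec total
```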
00:07:45.731 00:07:45.731 real 0m1.205s 00:07:45.731 user 0m4.134s 00:07:45.731 sys 0m0.064s 00:07:45.731 08:43:58 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.731 08:43:58 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:45.731 ************************************ 00:07:45.731 END TEST event_perf 00:07:45.731 ************************************ 00:07:45.731 08:43:58 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:45.731 08:43:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:45.731 08:43:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.731 08:43:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.731 ************************************ 00:07:45.731 START TEST event_reactor 00:07:45.731 ************************************ 00:07:45.731 08:43:58 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:45.731 [2024-11-06 08:43:58.960080] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:07:45.731 [2024-11-06 08:43:58.960168] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706236 ] 00:07:45.989 [2024-11-06 08:43:59.028964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.989 [2024-11-06 08:43:59.085566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.929 test_start 00:07:46.929 oneshot 00:07:46.929 tick 100 00:07:46.929 tick 100 00:07:46.929 tick 250 00:07:46.929 tick 100 00:07:46.930 tick 100 00:07:46.930 tick 100 00:07:46.930 tick 250 00:07:46.930 tick 500 00:07:46.930 tick 100 00:07:46.930 tick 100 00:07:46.930 tick 250 00:07:46.930 tick 100 00:07:46.930 tick 100 00:07:46.930 test_end 00:07:46.930 00:07:46.930 real 0m1.203s 00:07:46.930 user 0m1.129s 00:07:46.930 sys 0m0.070s 00:07:46.930 08:44:00 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.930 08:44:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:46.930 ************************************ 00:07:46.930 END TEST event_reactor 00:07:46.930 ************************************ 00:07:46.930 08:44:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:46.930 08:44:00 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:46.930 08:44:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.930 08:44:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.930 ************************************ 00:07:46.930 START TEST event_reactor_perf 00:07:46.930 ************************************ 00:07:46.930 08:44:00 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:46.930 [2024-11-06 08:44:00.212405] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:46.930 [2024-11-06 08:44:00.212470] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706456 ] 00:07:47.188 [2024-11-06 08:44:00.280646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.188 [2024-11-06 08:44:00.338614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.150 test_start 00:07:48.150 test_end 00:07:48.150 Performance: 438121 events per second 00:07:48.150 00:07:48.150 real 0m1.200s 00:07:48.150 user 0m1.126s 00:07:48.150 sys 0m0.070s 00:07:48.150 08:44:01 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.150 08:44:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.150 ************************************ 00:07:48.150 END TEST event_reactor_perf 00:07:48.150 ************************************ 00:07:48.150 08:44:01 event -- event/event.sh@49 -- # uname -s 00:07:48.447 08:44:01 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:48.447 08:44:01 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:48.447 08:44:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.447 08:44:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.447 08:44:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.447 ************************************ 00:07:48.447 START TEST event_scheduler 00:07:48.447 ************************************ 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:48.447 * Looking for test storage... 00:07:48.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1689 -- # lcov --version 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.447 08:44:01 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.447 08:44:01 event.event_scheduler -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:07:48.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.447 --rc genhtml_branch_coverage=1 00:07:48.447 --rc genhtml_function_coverage=1 00:07:48.448 --rc genhtml_legend=1 00:07:48.448 --rc geninfo_all_blocks=1 00:07:48.448 --rc geninfo_unexecuted_blocks=1 00:07:48.448 00:07:48.448 ' 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:07:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.448 --rc genhtml_branch_coverage=1 00:07:48.448 --rc genhtml_function_coverage=1 00:07:48.448 --rc 
genhtml_legend=1 00:07:48.448 --rc geninfo_all_blocks=1 00:07:48.448 --rc geninfo_unexecuted_blocks=1 00:07:48.448 00:07:48.448 ' 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:07:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.448 --rc genhtml_branch_coverage=1 00:07:48.448 --rc genhtml_function_coverage=1 00:07:48.448 --rc genhtml_legend=1 00:07:48.448 --rc geninfo_all_blocks=1 00:07:48.448 --rc geninfo_unexecuted_blocks=1 00:07:48.448 00:07:48.448 ' 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:07:48.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.448 --rc genhtml_branch_coverage=1 00:07:48.448 --rc genhtml_function_coverage=1 00:07:48.448 --rc genhtml_legend=1 00:07:48.448 --rc geninfo_all_blocks=1 00:07:48.448 --rc geninfo_unexecuted_blocks=1 00:07:48.448 00:07:48.448 ' 00:07:48.448 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:48.448 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=706656 00:07:48.448 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.448 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:48.448 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 706656 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 706656 ']' 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:48.448 08:44:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.448 [2024-11-06 08:44:01.636408] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:07:48.448 [2024-11-06 08:44:01.636505] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706656 ] 00:07:48.448 [2024-11-06 08:44:01.702305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.723 [2024-11-06 08:44:01.765977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.723 [2024-11-06 08:44:01.766031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.723 [2024-11-06 08:44:01.766096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.723 [2024-11-06 08:44:01.766099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:07:48.723 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.723 [2024-11-06 08:44:01.866971] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:48.723 [2024-11-06 08:44:01.867000] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:48.723 [2024-11-06 08:44:01.867019] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:48.723 [2024-11-06 08:44:01.867031] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:48.723 [2024-11-06 08:44:01.867041] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.723 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.723 [2024-11-06 08:44:01.970642] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.723 08:44:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.723 08:44:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.723 ************************************ 00:07:48.723 START TEST scheduler_create_thread 00:07:48.723 ************************************ 00:07:48.723 08:44:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.723 2 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.723 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 3 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 4 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 5 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 6 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 7 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 8 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 9 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 10 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.981 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.547 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.547 00:07:49.547 real 0m0.589s 00:07:49.547 user 0m0.009s 00:07:49.547 sys 0m0.005s 00:07:49.547 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.547 08:44:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.547 ************************************ 00:07:49.548 END TEST scheduler_create_thread 00:07:49.548 ************************************ 00:07:49.548 08:44:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:49.548 08:44:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 706656 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 706656 ']' 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@954 -- # kill 
-0 706656 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706656 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706656' 00:07:49.548 killing process with pid 706656 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 706656 00:07:49.548 08:44:02 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 706656 00:07:49.806 [2024-11-06 08:44:03.070883] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:50.064 00:07:50.064 real 0m1.850s 00:07:50.064 user 0m2.500s 00:07:50.064 sys 0m0.356s 00:07:50.064 08:44:03 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.064 08:44:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:50.064 ************************************ 00:07:50.064 END TEST event_scheduler 00:07:50.064 ************************************ 00:07:50.064 08:44:03 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:50.064 08:44:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:50.064 08:44:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.064 08:44:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.064 08:44:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:50.064 ************************************ 00:07:50.064 START TEST app_repeat 00:07:50.064 ************************************ 00:07:50.064 08:44:03 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:50.064 08:44:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=706855 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 706855' 00:07:50.323 Process app_repeat pid: 706855 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:50.323 spdk_app_start Round 0 00:07:50.323 08:44:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 706855 /var/tmp/spdk-nbd.sock 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 706855 ']' 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:50.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.323 08:44:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:50.323 [2024-11-06 08:44:03.376398] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:07:50.323 [2024-11-06 08:44:03.376462] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid706855 ] 00:07:50.323 [2024-11-06 08:44:03.443363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:50.323 [2024-11-06 08:44:03.507300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.323 [2024-11-06 08:44:03.507305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.581 08:44:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.581 08:44:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:50.581 08:44:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:50.839 Malloc0 00:07:50.839 08:44:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:51.097 Malloc1 00:07:51.097 08:44:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:51.097 
08:44:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.097 08:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.355 /dev/nbd0 00:07:51.355 08:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.355 08:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:51.355 1+0 records in 00:07:51.355 1+0 records out 00:07:51.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251509 s, 16.3 MB/s 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:51.355 08:44:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:51.355 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.355 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.355 08:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:51.613 /dev/nbd1 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:51.613 08:44:04 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:51.613 1+0 records in 00:07:51.613 1+0 records out 00:07:51.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162429 s, 25.2 MB/s 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:51.613 08:44:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.613 08:44:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.870 08:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:51.870 { 00:07:51.870 "nbd_device": "/dev/nbd0", 00:07:51.870 "bdev_name": "Malloc0" 00:07:51.870 }, 00:07:51.870 { 00:07:51.870 "nbd_device": "/dev/nbd1", 00:07:51.870 "bdev_name": "Malloc1" 00:07:51.870 } 00:07:51.870 ]' 00:07:51.870 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:51.870 { 00:07:51.870 "nbd_device": "/dev/nbd0", 00:07:51.870 "bdev_name": "Malloc0" 00:07:51.870 
}, 00:07:51.870 { 00:07:51.870 "nbd_device": "/dev/nbd1", 00:07:51.870 "bdev_name": "Malloc1" 00:07:51.870 } 00:07:51.870 ]' 00:07:51.870 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.128 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:52.128 /dev/nbd1' 00:07:52.128 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:52.128 /dev/nbd1' 00:07:52.128 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.128 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:52.129 256+0 records in 00:07:52.129 256+0 records out 00:07:52.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511627 s, 205 MB/s 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:52.129 256+0 records in 00:07:52.129 256+0 records out 00:07:52.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020087 s, 52.2 MB/s 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:52.129 256+0 records in 00:07:52.129 256+0 records out 00:07:52.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220491 s, 47.6 MB/s 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:52.129 08:44:05 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.129 08:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.387 08:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:52.644 08:44:05 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.644 08:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.902 08:44:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.902 08:44:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:53.160 08:44:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:53.418 [2024-11-06 08:44:06.659921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:53.677 [2024-11-06 08:44:06.714549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.677 [2024-11-06 08:44:06.714554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.677 [2024-11-06 08:44:06.769561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:53.677 [2024-11-06 08:44:06.769620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:56.202 08:44:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:56.202 08:44:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:56.202 spdk_app_start Round 1 00:07:56.202 08:44:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 706855 /var/tmp/spdk-nbd.sock 00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 706855 ']' 00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.202 08:44:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:56.461 08:44:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:56.461 08:44:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:56.461 08:44:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:56.721 Malloc0 00:07:56.721 08:44:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:57.287 Malloc1 00:07:57.287 08:44:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.287 08:44:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:57.544 /dev/nbd0 00:07:57.544 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.544 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.544 1+0 records in 00:07:57.544 1+0 records out 00:07:57.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257192 s, 15.9 MB/s 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:57.544 08:44:10 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.544 08:44:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:57.544 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.544 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.544 08:44:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:57.803 /dev/nbd1 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:57.803 1+0 records in 00:07:57.803 1+0 records out 00:07:57.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189506 s, 21.6 MB/s 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.803 08:44:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.803 08:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.061 { 00:07:58.061 "nbd_device": "/dev/nbd0", 00:07:58.061 "bdev_name": "Malloc0" 00:07:58.061 }, 00:07:58.061 { 00:07:58.061 "nbd_device": "/dev/nbd1", 00:07:58.061 "bdev_name": "Malloc1" 00:07:58.061 } 00:07:58.061 ]' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.061 { 00:07:58.061 "nbd_device": "/dev/nbd0", 00:07:58.061 "bdev_name": "Malloc0" 00:07:58.061 }, 00:07:58.061 { 00:07:58.061 "nbd_device": "/dev/nbd1", 00:07:58.061 "bdev_name": "Malloc1" 00:07:58.061 } 00:07:58.061 ]' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.061 /dev/nbd1' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.061 /dev/nbd1' 00:07:58.061 
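The records above show `nbd_get_count` echoing the attached device names and counting the lines that match `/dev/nbd` with `grep -c`. A minimal self-contained sketch of that counting pattern, with the device list hard-coded as a stand-in for the `nbd_get_disks` RPC output:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count counting step traced in the log.
# The device list is a hard-coded stand-in, not queried from the
# real /var/tmp/spdk-nbd.sock RPC server.
set -euo pipefail

nbd_disks_name='/dev/nbd0
/dev/nbd1'

# grep -c prints the number of matching lines; with zero matches it
# exits non-zero, which the trace absorbs with a trailing `true`
# (visible later in the log when the disk list is empty).
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"    # prints "2" for the two-device list above
```

The same `|| true` fallback is what lets the teardown path later in the trace arrive at `count=0` without aborting under `set -e`.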
08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:58.061 256+0 records in 00:07:58.061 256+0 records out 00:07:58.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420633 s, 249 MB/s 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.061 256+0 records in 00:07:58.061 256+0 records out 00:07:58.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218014 s, 48.1 MB/s 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.061 256+0 records in 00:07:58.061 256+0 records out 00:07:58.061 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222704 s, 47.1 MB/s 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.061 08:44:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.062 08:44:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.627 08:44:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:58.884 08:44:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:58.884 08:44:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.885 08:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:59.143 08:44:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:59.143 08:44:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:59.400 08:44:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:59.658 [2024-11-06 08:44:12.768272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:59.658 [2024-11-06 08:44:12.822770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.658 [2024-11-06 08:44:12.822770] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.658 [2024-11-06 08:44:12.882894] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:59.658 [2024-11-06 08:44:12.882962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:02.938 08:44:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:02.938 08:44:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:02.938 spdk_app_start Round 2 00:08:02.938 08:44:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 706855 /var/tmp/spdk-nbd.sock 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 706855 ']' 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:02.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
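The `waitfornbd` helper exercised repeatedly above polls `/proc/partitions` for the device name as a whole word, retrying up to 20 times before giving up. A simplified, self-contained sketch of that polling loop, run here against a temporary file instead of the real `/proc/partitions`:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitfornbd retry loop from the trace:
# poll a file until a name appears as a whole word (grep -q -w),
# giving up after 20 attempts. A temp file stands in for
# /proc/partitions so the sketch runs anywhere.
set -euo pipefail

wait_for_word() {
    local name=$1 file=$2 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" "$file"; then
            return 0    # the `break` branch in the traced helper
        fi
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp)
echo "43 0 65536 nbd0" > "$tmp"    # fake /proc/partitions row
wait_for_word nbd0 "$tmp" && echo "nbd0 present"
rm -f "$tmp"
```

The real helper follows a successful match with a direct-I/O `dd` read to confirm the device actually serves data, which the next records in the trace show.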
00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:02.938 08:44:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:02.938 08:44:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:02.938 Malloc0 00:08:02.938 08:44:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:03.196 Malloc1 00:08:03.196 08:44:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.196 08:44:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:03.454 /dev/nbd0 00:08:03.454 08:44:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:03.454 08:44:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:03.454 1+0 records in 00:08:03.454 1+0 records out 00:08:03.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181251 s, 22.6 MB/s 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:03.454 08:44:16 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:03.454 08:44:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:03.454 08:44:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.454 08:44:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:03.454 08:44:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:04.019 /dev/nbd1 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:04.019 1+0 records in 00:08:04.019 1+0 records out 00:08:04.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156862 s, 26.1 MB/s 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.019 08:44:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.019 08:44:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:04.278 { 00:08:04.278 "nbd_device": "/dev/nbd0", 00:08:04.278 "bdev_name": "Malloc0" 00:08:04.278 }, 00:08:04.278 { 00:08:04.278 "nbd_device": "/dev/nbd1", 00:08:04.278 "bdev_name": "Malloc1" 00:08:04.278 } 00:08:04.278 ]' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:04.278 { 00:08:04.278 "nbd_device": "/dev/nbd0", 00:08:04.278 "bdev_name": "Malloc0" 00:08:04.278 }, 00:08:04.278 { 00:08:04.278 "nbd_device": "/dev/nbd1", 00:08:04.278 "bdev_name": "Malloc1" 00:08:04.278 } 00:08:04.278 ]' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:04.278 /dev/nbd1' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:04.278 /dev/nbd1' 00:08:04.278 
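The `nbd_dd_data_verify` cycle traced above writes 1 MiB of random data through each nbd device and then compares it back with `cmp -b -n 1M`. A self-contained sketch of that write/verify cycle, using regular temp files in place of `/dev/nbd0` and `/dev/nbd1` (so `oflag=direct` is dropped, since O_DIRECT needs a real block device or aligned I/O):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify pattern from the
# trace: fill a source file with random data, dd it to a target,
# then byte-compare the first 1 MiB. Temp files stand in for the
# nbd devices used in the real test.
set -euo pipefail

src=$(mktemp)
dst=$(mktemp)

# write phase: 256 x 4 KiB blocks of random data, as in the log
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none
dd if="$src" of="$dst" bs=4096 count=256 status=none

# verify phase: cmp exits non-zero on the first differing byte,
# which would abort the script under set -e
cmp -b -n 1M "$src" "$dst" && echo "verify OK"

rm -f "$src" "$dst"
```

`cmp` exiting zero for both devices is what lets the trace proceed silently from the verify step straight to removing `nbdrandtest` and stopping the disks.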
08:44:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:04.278 256+0 records in 00:08:04.278 256+0 records out 00:08:04.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047744 s, 220 MB/s 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:04.278 256+0 records in 00:08:04.278 256+0 records out 00:08:04.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201686 s, 52.0 MB/s 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.278 256+0 records in 00:08:04.278 256+0 records out 00:08:04.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217323 s, 48.2 MB/s 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.278 08:44:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.536 08:44:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:04.794 08:44:18 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.794 08:44:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:05.052 08:44:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:05.052 08:44:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:05.617 08:44:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:05.617 [2024-11-06 08:44:18.841577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:05.617 [2024-11-06 08:44:18.895470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.617 [2024-11-06 08:44:18.895473] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.874 [2024-11-06 08:44:18.955722] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:05.874 [2024-11-06 08:44:18.955782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:08.400 08:44:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 706855 /var/tmp/spdk-nbd.sock 00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 706855 ']' 00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:08.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:08.400 08:44:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:08.658 08:44:21 event.app_repeat -- event/event.sh@39 -- # killprocess 706855 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 706855 ']' 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 706855 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 706855 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 706855' 00:08:08.658 killing process with pid 706855 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 706855 00:08:08.658 08:44:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 706855 00:08:08.917 spdk_app_start is called in Round 0. 00:08:08.917 Shutdown signal received, stop current app iteration 00:08:08.917 Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 reinitialization... 00:08:08.917 spdk_app_start is called in Round 1. 00:08:08.917 Shutdown signal received, stop current app iteration 00:08:08.917 Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 reinitialization... 00:08:08.917 spdk_app_start is called in Round 2. 
00:08:08.917 Shutdown signal received, stop current app iteration 00:08:08.917 Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 reinitialization... 00:08:08.917 spdk_app_start is called in Round 3. 00:08:08.917 Shutdown signal received, stop current app iteration 00:08:08.917 08:44:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:08.917 08:44:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:08.917 00:08:08.917 real 0m18.781s 00:08:08.917 user 0m41.541s 00:08:08.917 sys 0m3.266s 00:08:08.917 08:44:22 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.917 08:44:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 ************************************ 00:08:08.917 END TEST app_repeat 00:08:08.917 ************************************ 00:08:08.917 08:44:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:08.917 08:44:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:08.917 08:44:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:08.917 08:44:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.917 08:44:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:08.917 ************************************ 00:08:08.917 START TEST cpu_locks 00:08:08.917 ************************************ 00:08:08.917 08:44:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:09.176 * Looking for test storage... 
00:08:09.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1689 -- # lcov --version 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.176 08:44:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:09.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.176 --rc genhtml_branch_coverage=1 00:08:09.176 --rc genhtml_function_coverage=1 00:08:09.176 --rc genhtml_legend=1 00:08:09.176 --rc geninfo_all_blocks=1 00:08:09.176 --rc geninfo_unexecuted_blocks=1 00:08:09.176 00:08:09.176 ' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:09.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.176 --rc genhtml_branch_coverage=1 00:08:09.176 --rc genhtml_function_coverage=1 00:08:09.176 --rc genhtml_legend=1 00:08:09.176 --rc geninfo_all_blocks=1 00:08:09.176 --rc geninfo_unexecuted_blocks=1 
00:08:09.176 00:08:09.176 ' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:09.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.176 --rc genhtml_branch_coverage=1 00:08:09.176 --rc genhtml_function_coverage=1 00:08:09.176 --rc genhtml_legend=1 00:08:09.176 --rc geninfo_all_blocks=1 00:08:09.176 --rc geninfo_unexecuted_blocks=1 00:08:09.176 00:08:09.176 ' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:09.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.176 --rc genhtml_branch_coverage=1 00:08:09.176 --rc genhtml_function_coverage=1 00:08:09.176 --rc genhtml_legend=1 00:08:09.176 --rc geninfo_all_blocks=1 00:08:09.176 --rc geninfo_unexecuted_blocks=1 00:08:09.176 00:08:09.176 ' 00:08:09.176 08:44:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:09.176 08:44:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:09.176 08:44:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:09.176 08:44:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.176 08:44:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.176 ************************************ 00:08:09.176 START TEST default_locks 00:08:09.176 ************************************ 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=709339 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 709339 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 709339 ']' 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.176 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.177 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.177 [2024-11-06 08:44:22.412563] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:09.177 [2024-11-06 08:44:22.412640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709339 ] 00:08:09.434 [2024-11-06 08:44:22.481040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.435 [2024-11-06 08:44:22.539224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.694 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.694 08:44:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:09.694 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 709339 00:08:09.694 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 709339 00:08:09.694 08:44:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.952 lslocks: write error 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 709339 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 709339 ']' 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 709339 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709339 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 709339' 00:08:09.952 killing process with pid 709339 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 709339 00:08:09.952 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 709339 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 709339 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 709339 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 709339 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 709339 ']' 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (709339) - No such process 00:08:10.517 ERROR: process (pid: 709339) is no longer running 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:10.517 00:08:10.517 real 0m1.176s 00:08:10.517 user 0m1.132s 00:08:10.517 sys 0m0.506s 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.517 08:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.517 ************************************ 00:08:10.517 END TEST default_locks 00:08:10.517 ************************************ 00:08:10.517 08:44:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:10.517 08:44:23 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.517 08:44:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.517 08:44:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.517 ************************************ 00:08:10.517 START TEST default_locks_via_rpc 00:08:10.518 ************************************ 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=709597 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 709597 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 709597 ']' 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.518 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.518 [2024-11-06 08:44:23.642471] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:10.518 [2024-11-06 08:44:23.642555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709597 ] 00:08:10.518 [2024-11-06 08:44:23.707606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.518 [2024-11-06 08:44:23.767108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.776 08:44:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 709597 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 709597 00:08:10.776 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 709597 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 709597 ']' 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 709597 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709597 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 709597' 00:08:11.034 killing process with pid 709597 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 709597 00:08:11.034 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 709597 00:08:11.599 00:08:11.599 real 0m1.141s 00:08:11.599 user 0m1.106s 00:08:11.599 sys 0m0.498s 00:08:11.599 08:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.599 08:44:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 ************************************ 00:08:11.599 END TEST default_locks_via_rpc 00:08:11.599 ************************************ 00:08:11.599 08:44:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:11.599 08:44:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.599 08:44:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.599 08:44:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 ************************************ 00:08:11.599 START TEST non_locking_app_on_locked_coremask 00:08:11.599 ************************************ 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=709784 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 709784 /var/tmp/spdk.sock 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 709784 ']' 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:11.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.599 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:11.599 [2024-11-06 08:44:24.832229] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:11.599 [2024-11-06 08:44:24.832314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709784 ] 00:08:11.857 [2024-11-06 08:44:24.898609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.857 [2024-11-06 08:44:24.957968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=709789 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 709789 /var/tmp/spdk2.sock 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 709789 ']' 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.114 08:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.114 [2024-11-06 08:44:25.279389] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:12.114 [2024-11-06 08:44:25.279461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid709789 ] 00:08:12.114 [2024-11-06 08:44:25.377349] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:12.114 [2024-11-06 08:44:25.377377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.372 [2024-11-06 08:44:25.489572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.306 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.306 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:13.306 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 709784 00:08:13.306 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 709784 00:08:13.306 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:13.563 lslocks: write error 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 709784 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 709784 ']' 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 709784 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709784 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.563 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.564 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 709784' 00:08:13.564 killing process with pid 709784 00:08:13.564 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 709784 00:08:13.564 08:44:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 709784 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 709789 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 709789 ']' 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 709789 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 709789 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 709789' 00:08:14.497 killing process with pid 709789 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 709789 00:08:14.497 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 709789 00:08:14.755 00:08:14.755 real 0m3.210s 00:08:14.755 user 0m3.407s 00:08:14.755 sys 0m1.054s 00:08:14.755 08:44:27 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.755 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:14.755 ************************************ 00:08:14.755 END TEST non_locking_app_on_locked_coremask 00:08:14.755 ************************************ 00:08:14.755 08:44:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:14.755 08:44:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.755 08:44:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.755 08:44:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:14.755 ************************************ 00:08:14.755 START TEST locking_app_on_unlocked_coremask 00:08:14.755 ************************************ 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=710113 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 710113 /var/tmp/spdk.sock 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 710113 ']' 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:14.755 08:44:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:14.755 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.014 [2024-11-06 08:44:28.098344] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:15.014 [2024-11-06 08:44:28.098410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710113 ] 00:08:15.014 [2024-11-06 08:44:28.164023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:15.014 [2024-11-06 08:44:28.164052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.014 [2024-11-06 08:44:28.218696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=710221 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 710221 /var/tmp/spdk2.sock 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 710221 ']' 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:15.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.272 08:44:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:15.272 [2024-11-06 08:44:28.533183] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:15.272 [2024-11-06 08:44:28.533254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710221 ] 00:08:15.529 [2024-11-06 08:44:28.631334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.529 [2024-11-06 08:44:28.742472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.462 08:44:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.462 08:44:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:16.462 08:44:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 710221 00:08:16.462 08:44:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 710221 00:08:16.462 08:44:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:17.028 lslocks: write error 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 710113 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 710113 ']' 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 710113 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710113 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710113' 00:08:17.028 killing process with pid 710113 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 710113 00:08:17.028 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 710113 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 710221 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 710221 ']' 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 710221 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710221 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710221' 00:08:17.960 killing process with pid 710221 00:08:17.960 08:44:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 710221 00:08:17.960 08:44:30 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 710221 00:08:18.219 00:08:18.219 real 0m3.323s 00:08:18.219 user 0m3.592s 00:08:18.219 sys 0m1.045s 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.219 ************************************ 00:08:18.219 END TEST locking_app_on_unlocked_coremask 00:08:18.219 ************************************ 00:08:18.219 08:44:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:18.219 08:44:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:18.219 08:44:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.219 08:44:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.219 ************************************ 00:08:18.219 START TEST locking_app_on_locked_coremask 00:08:18.219 ************************************ 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=710537 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 710537 /var/tmp/spdk.sock 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 710537 ']' 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.219 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.219 [2024-11-06 08:44:31.474402] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:18.219 [2024-11-06 08:44:31.474498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710537 ] 00:08:18.478 [2024-11-06 08:44:31.541756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.478 [2024-11-06 08:44:31.598840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=710661 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 710661 /var/tmp/spdk2.sock 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 710661 /var/tmp/spdk2.sock 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 710661 /var/tmp/spdk2.sock 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 710661 ']' 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:18.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:18.736 08:44:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.736 [2024-11-06 08:44:31.911856] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:18.736 [2024-11-06 08:44:31.911952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710661 ] 00:08:18.736 [2024-11-06 08:44:32.009010] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 710537 has claimed it. 00:08:18.736 [2024-11-06 08:44:32.009057] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:19.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (710661) - No such process 00:08:19.669 ERROR: process (pid: 710661) is no longer running 00:08:19.669 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 710537 00:08:19.670 08:44:32 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:19.670 lslocks: write error 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 710537 ']' 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710537' 00:08:19.670 killing process with pid 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 710537 00:08:19.670 08:44:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 710537 00:08:20.235 00:08:20.235 real 0m1.940s 00:08:20.235 user 0m2.148s 00:08:20.235 sys 0m0.595s 00:08:20.235 08:44:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.235 08:44:33 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.235 ************************************ 00:08:20.235 END TEST locking_app_on_locked_coremask 00:08:20.235 ************************************ 00:08:20.235 08:44:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:20.235 08:44:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.235 08:44:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.235 08:44:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.235 ************************************ 00:08:20.235 START TEST locking_overlapped_coremask 00:08:20.235 ************************************ 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=710833 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 710833 /var/tmp/spdk.sock 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 710833 ']' 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.235 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.235 [2024-11-06 08:44:33.466513] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:20.235 [2024-11-06 08:44:33.466598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710833 ] 00:08:20.493 [2024-11-06 08:44:33.532600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.493 [2024-11-06 08:44:33.594613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.493 [2024-11-06 08:44:33.594643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.493 [2024-11-06 08:44:33.594645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=710961 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 710961 /var/tmp/spdk2.sock 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 710961 /var/tmp/spdk2.sock 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 
-m 0x1c -r /var/tmp/spdk2.sock 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 710961 /var/tmp/spdk2.sock 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 710961 ']' 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:20.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.751 08:44:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.751 [2024-11-06 08:44:33.934350] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:20.751 [2024-11-06 08:44:33.934449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid710961 ] 00:08:20.752 [2024-11-06 08:44:34.036796] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 710833 has claimed it. 00:08:20.752 [2024-11-06 08:44:34.036874] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:21.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (710961) - No such process 00:08:21.685 ERROR: process (pid: 710961) is no longer running 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 710833 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 710833 ']' 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 710833 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710833 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710833' 00:08:21.685 killing process with pid 710833 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 710833 00:08:21.685 08:44:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 710833 00:08:21.945 00:08:21.945 real 0m1.690s 00:08:21.945 user 0m4.768s 00:08:21.945 sys 0m0.445s 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:21.945 ************************************ 
00:08:21.945 END TEST locking_overlapped_coremask 00:08:21.945 ************************************ 00:08:21.945 08:44:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:21.945 08:44:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.945 08:44:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.945 08:44:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:21.945 ************************************ 00:08:21.945 START TEST locking_overlapped_coremask_via_rpc 00:08:21.945 ************************************ 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=711123 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 711123 /var/tmp/spdk.sock 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 711123 ']' 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:21.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.945 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:21.945 [2024-11-06 08:44:35.207938] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:21.945 [2024-11-06 08:44:35.208049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711123 ] 00:08:22.203 [2024-11-06 08:44:35.271002] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:22.203 [2024-11-06 08:44:35.271034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.203 [2024-11-06 08:44:35.326961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.203 [2024-11-06 08:44:35.327021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.203 [2024-11-06 08:44:35.327024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.461 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.461 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=711134 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 711134 /var/tmp/spdk2.sock 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 711134 ']' 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.462 08:44:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.462 [2024-11-06 08:44:35.656969] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:22.462 [2024-11-06 08:44:35.657065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711134 ] 00:08:22.720 [2024-11-06 08:44:35.764138] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:22.720 [2024-11-06 08:44:35.764198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.720 [2024-11-06 08:44:35.893453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.720 [2024-11-06 08:44:35.896917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:22.720 [2024-11-06 08:44:35.896920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.655 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.656 08:44:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.656 [2024-11-06 08:44:36.669937] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 711123 has claimed it. 00:08:23.656 request: 00:08:23.656 { 00:08:23.656 "method": "framework_enable_cpumask_locks", 00:08:23.656 "req_id": 1 00:08:23.656 } 00:08:23.656 Got JSON-RPC error response 00:08:23.656 response: 00:08:23.656 { 00:08:23.656 "code": -32603, 00:08:23.656 "message": "Failed to claim CPU core: 2" 00:08:23.656 } 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 711123 /var/tmp/spdk.sock 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- 
# '[' -z 711123 ']' 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.656 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 711134 /var/tmp/spdk2.sock 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 711134 ']' 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:23.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.914 08:44:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:24.172 00:08:24.172 real 0m2.100s 00:08:24.172 user 0m1.173s 00:08:24.172 sys 0m0.169s 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.172 08:44:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.172 ************************************ 00:08:24.172 END TEST locking_overlapped_coremask_via_rpc 00:08:24.172 ************************************ 00:08:24.172 08:44:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:24.172 08:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 711123 ]] 00:08:24.172 08:44:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 711123 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 711123 ']' 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 711123 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 711123 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 711123' 00:08:24.172 killing process with pid 711123 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 711123 00:08:24.172 08:44:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 711123 00:08:24.738 08:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 711134 ]] 00:08:24.738 08:44:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 711134 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 711134 ']' 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 711134 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 711134 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 711134' 00:08:24.738 
killing process with pid 711134 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 711134 00:08:24.738 08:44:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 711134 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 711123 ]] 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 711123 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 711123 ']' 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 711123 00:08:24.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (711123) - No such process 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 711123 is not found' 00:08:24.997 Process with pid 711123 is not found 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 711134 ]] 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 711134 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 711134 ']' 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 711134 00:08:24.997 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (711134) - No such process 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 711134 is not found' 00:08:24.997 Process with pid 711134 is not found 00:08:24.997 08:44:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:24.997 00:08:24.997 real 0m16.039s 00:08:24.997 user 0m29.310s 00:08:24.997 sys 0m5.295s 00:08:24.997 08:44:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.997 08:44:38 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.997 ************************************ 00:08:24.997 END TEST cpu_locks 00:08:24.997 ************************************ 00:08:24.997 00:08:24.997 real 0m40.721s 00:08:24.997 user 1m19.934s 00:08:24.997 sys 0m9.391s 00:08:24.997 08:44:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.997 08:44:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:24.997 ************************************ 00:08:24.997 END TEST event 00:08:24.997 ************************************ 00:08:24.997 08:44:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:24.997 08:44:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.997 08:44:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.997 08:44:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.256 ************************************ 00:08:25.256 START TEST thread 00:08:25.256 ************************************ 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:25.256 * Looking for test storage... 
00:08:25.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1689 -- # lcov --version 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:25.256 08:44:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.256 08:44:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.256 08:44:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.256 08:44:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.256 08:44:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.256 08:44:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.256 08:44:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.256 08:44:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.256 08:44:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.256 08:44:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.256 08:44:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.256 08:44:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:25.256 08:44:38 thread -- scripts/common.sh@345 -- # : 1 00:08:25.256 08:44:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.256 08:44:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:25.256 08:44:38 thread -- scripts/common.sh@365 -- # decimal 1 00:08:25.256 08:44:38 thread -- scripts/common.sh@353 -- # local d=1 00:08:25.256 08:44:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.256 08:44:38 thread -- scripts/common.sh@355 -- # echo 1 00:08:25.256 08:44:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.256 08:44:38 thread -- scripts/common.sh@366 -- # decimal 2 00:08:25.256 08:44:38 thread -- scripts/common.sh@353 -- # local d=2 00:08:25.256 08:44:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.256 08:44:38 thread -- scripts/common.sh@355 -- # echo 2 00:08:25.256 08:44:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.256 08:44:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.256 08:44:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.256 08:44:38 thread -- scripts/common.sh@368 -- # return 0 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.256 --rc genhtml_branch_coverage=1 00:08:25.256 --rc genhtml_function_coverage=1 00:08:25.256 --rc genhtml_legend=1 00:08:25.256 --rc geninfo_all_blocks=1 00:08:25.256 --rc geninfo_unexecuted_blocks=1 00:08:25.256 00:08:25.256 ' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.256 --rc genhtml_branch_coverage=1 00:08:25.256 --rc genhtml_function_coverage=1 00:08:25.256 --rc genhtml_legend=1 00:08:25.256 --rc geninfo_all_blocks=1 00:08:25.256 --rc geninfo_unexecuted_blocks=1 00:08:25.256 00:08:25.256 ' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:25.256 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.256 --rc genhtml_branch_coverage=1 00:08:25.256 --rc genhtml_function_coverage=1 00:08:25.256 --rc genhtml_legend=1 00:08:25.256 --rc geninfo_all_blocks=1 00:08:25.256 --rc geninfo_unexecuted_blocks=1 00:08:25.256 00:08:25.256 ' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:25.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.256 --rc genhtml_branch_coverage=1 00:08:25.256 --rc genhtml_function_coverage=1 00:08:25.256 --rc genhtml_legend=1 00:08:25.256 --rc geninfo_all_blocks=1 00:08:25.256 --rc geninfo_unexecuted_blocks=1 00:08:25.256 00:08:25.256 ' 00:08:25.256 08:44:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.256 08:44:38 thread -- common/autotest_common.sh@10 -- # set +x 00:08:25.256 ************************************ 00:08:25.256 START TEST thread_poller_perf 00:08:25.256 ************************************ 00:08:25.256 08:44:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:25.256 [2024-11-06 08:44:38.484030] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:25.256 [2024-11-06 08:44:38.484098] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711630 ] 00:08:25.514 [2024-11-06 08:44:38.551286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.514 [2024-11-06 08:44:38.606519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.515 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:26.450 [2024-11-06T07:44:39.739Z] ====================================== 00:08:26.450 [2024-11-06T07:44:39.739Z] busy:2712024258 (cyc) 00:08:26.450 [2024-11-06T07:44:39.739Z] total_run_count: 366000 00:08:26.450 [2024-11-06T07:44:39.739Z] tsc_hz: 2700000000 (cyc) 00:08:26.450 [2024-11-06T07:44:39.739Z] ====================================== 00:08:26.450 [2024-11-06T07:44:39.739Z] poller_cost: 7409 (cyc), 2744 (nsec) 00:08:26.450 00:08:26.450 real 0m1.206s 00:08:26.450 user 0m1.137s 00:08:26.450 sys 0m0.064s 00:08:26.450 08:44:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.450 08:44:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:26.450 ************************************ 00:08:26.450 END TEST thread_poller_perf 00:08:26.450 ************************************ 00:08:26.450 08:44:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:26.450 08:44:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:26.450 08:44:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.450 08:44:39 thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.450 ************************************ 00:08:26.450 START TEST thread_poller_perf 00:08:26.450 
************************************ 00:08:26.450 08:44:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:26.708 [2024-11-06 08:44:39.740847] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:26.708 [2024-11-06 08:44:39.740917] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711784 ] 00:08:26.708 [2024-11-06 08:44:39.805739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.708 [2024-11-06 08:44:39.862268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.708 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:27.642 [2024-11-06T07:44:40.931Z] ====================================== 00:08:27.642 [2024-11-06T07:44:40.931Z] busy:2702062197 (cyc) 00:08:27.642 [2024-11-06T07:44:40.931Z] total_run_count: 4663000 00:08:27.642 [2024-11-06T07:44:40.931Z] tsc_hz: 2700000000 (cyc) 00:08:27.642 [2024-11-06T07:44:40.931Z] ====================================== 00:08:27.642 [2024-11-06T07:44:40.931Z] poller_cost: 579 (cyc), 214 (nsec) 00:08:27.642 00:08:27.642 real 0m1.198s 00:08:27.642 user 0m1.131s 00:08:27.642 sys 0m0.061s 00:08:27.642 08:44:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.642 08:44:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:27.642 ************************************ 00:08:27.642 END TEST thread_poller_perf 00:08:27.642 ************************************ 00:08:27.900 08:44:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:27.900 00:08:27.900 real 0m2.646s 00:08:27.900 user 0m2.404s 00:08:27.900 sys 0m0.246s 00:08:27.900 08:44:40 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.900 08:44:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:27.900 ************************************ 00:08:27.900 END TEST thread 00:08:27.900 ************************************ 00:08:27.900 08:44:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:27.900 08:44:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:27.900 08:44:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.900 08:44:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.900 08:44:40 -- common/autotest_common.sh@10 -- # set +x 00:08:27.900 ************************************ 00:08:27.900 START TEST app_cmdline 00:08:27.900 ************************************ 00:08:27.900 08:44:40 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:27.900 * Looking for test storage... 00:08:27.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1689 -- # lcov --version 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.900 08:44:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.900 --rc genhtml_branch_coverage=1 
00:08:27.900 --rc genhtml_function_coverage=1 00:08:27.900 --rc genhtml_legend=1 00:08:27.900 --rc geninfo_all_blocks=1 00:08:27.900 --rc geninfo_unexecuted_blocks=1 00:08:27.900 00:08:27.900 ' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.900 --rc genhtml_branch_coverage=1 00:08:27.900 --rc genhtml_function_coverage=1 00:08:27.900 --rc genhtml_legend=1 00:08:27.900 --rc geninfo_all_blocks=1 00:08:27.900 --rc geninfo_unexecuted_blocks=1 00:08:27.900 00:08:27.900 ' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.900 --rc genhtml_branch_coverage=1 00:08:27.900 --rc genhtml_function_coverage=1 00:08:27.900 --rc genhtml_legend=1 00:08:27.900 --rc geninfo_all_blocks=1 00:08:27.900 --rc geninfo_unexecuted_blocks=1 00:08:27.900 00:08:27.900 ' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:27.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.900 --rc genhtml_branch_coverage=1 00:08:27.900 --rc genhtml_function_coverage=1 00:08:27.900 --rc genhtml_legend=1 00:08:27.900 --rc geninfo_all_blocks=1 00:08:27.900 --rc geninfo_unexecuted_blocks=1 00:08:27.900 00:08:27.900 ' 00:08:27.900 08:44:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:27.900 08:44:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=711989 00:08:27.900 08:44:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:27.900 08:44:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 711989 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 711989 ']' 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:27.900 08:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:28.159 [2024-11-06 08:44:41.198733] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:28.159 [2024-11-06 08:44:41.198829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid711989 ] 00:08:28.159 [2024-11-06 08:44:41.265867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.159 [2024-11-06 08:44:41.324068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.416 08:44:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.416 08:44:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:08:28.416 08:44:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:28.674 { 00:08:28.674 "version": "SPDK v25.01-pre git sha1 481542548", 00:08:28.674 "fields": { 00:08:28.674 "major": 25, 00:08:28.674 "minor": 1, 00:08:28.674 "patch": 0, 00:08:28.674 "suffix": "-pre", 00:08:28.674 "commit": "481542548" 00:08:28.674 } 00:08:28.674 } 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:28.674 08:44:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type 
-t "$arg")" in 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:28.674 08:44:41 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.932 request: 00:08:28.932 { 00:08:28.932 "method": "env_dpdk_get_mem_stats", 00:08:28.932 "req_id": 1 00:08:28.932 } 00:08:28.932 Got JSON-RPC error response 00:08:28.932 response: 00:08:28.932 { 00:08:28.932 "code": -32601, 00:08:28.932 "message": "Method not found" 00:08:28.932 } 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.932 08:44:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 711989 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 711989 ']' 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 711989 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 711989 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 711989' 00:08:28.932 killing process with pid 711989 00:08:28.932 08:44:42 
app_cmdline -- common/autotest_common.sh@969 -- # kill 711989 00:08:28.932 08:44:42 app_cmdline -- common/autotest_common.sh@974 -- # wait 711989 00:08:29.497 00:08:29.497 real 0m1.618s 00:08:29.497 user 0m2.011s 00:08:29.497 sys 0m0.475s 00:08:29.497 08:44:42 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.497 08:44:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:29.497 ************************************ 00:08:29.497 END TEST app_cmdline 00:08:29.497 ************************************ 00:08:29.497 08:44:42 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:29.497 08:44:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.497 08:44:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.497 08:44:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.497 ************************************ 00:08:29.497 START TEST version 00:08:29.497 ************************************ 00:08:29.497 08:44:42 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:29.497 * Looking for test storage... 
00:08:29.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:29.497 08:44:42 version -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:29.497 08:44:42 version -- common/autotest_common.sh@1689 -- # lcov --version 00:08:29.497 08:44:42 version -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:29.755 08:44:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.755 08:44:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.755 08:44:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.755 08:44:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.755 08:44:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.755 08:44:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.755 08:44:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.755 08:44:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.755 08:44:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.755 08:44:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.755 08:44:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.755 08:44:42 version -- scripts/common.sh@344 -- # case "$op" in 00:08:29.755 08:44:42 version -- scripts/common.sh@345 -- # : 1 00:08:29.755 08:44:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.755 08:44:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.755 08:44:42 version -- scripts/common.sh@365 -- # decimal 1 00:08:29.755 08:44:42 version -- scripts/common.sh@353 -- # local d=1 00:08:29.755 08:44:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.755 08:44:42 version -- scripts/common.sh@355 -- # echo 1 00:08:29.755 08:44:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.755 08:44:42 version -- scripts/common.sh@366 -- # decimal 2 00:08:29.755 08:44:42 version -- scripts/common.sh@353 -- # local d=2 00:08:29.755 08:44:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.755 08:44:42 version -- scripts/common.sh@355 -- # echo 2 00:08:29.755 08:44:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.755 08:44:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.755 08:44:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.755 08:44:42 version -- scripts/common.sh@368 -- # return 0 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.755 --rc genhtml_branch_coverage=1 00:08:29.755 --rc genhtml_function_coverage=1 00:08:29.755 --rc genhtml_legend=1 00:08:29.755 --rc geninfo_all_blocks=1 00:08:29.755 --rc geninfo_unexecuted_blocks=1 00:08:29.755 00:08:29.755 ' 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.755 --rc genhtml_branch_coverage=1 00:08:29.755 --rc genhtml_function_coverage=1 00:08:29.755 --rc genhtml_legend=1 00:08:29.755 --rc geninfo_all_blocks=1 00:08:29.755 --rc geninfo_unexecuted_blocks=1 00:08:29.755 00:08:29.755 ' 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:29.755 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.755 --rc genhtml_branch_coverage=1 00:08:29.755 --rc genhtml_function_coverage=1 00:08:29.755 --rc genhtml_legend=1 00:08:29.755 --rc geninfo_all_blocks=1 00:08:29.755 --rc geninfo_unexecuted_blocks=1 00:08:29.755 00:08:29.755 ' 00:08:29.755 08:44:42 version -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:29.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.755 --rc genhtml_branch_coverage=1 00:08:29.755 --rc genhtml_function_coverage=1 00:08:29.755 --rc genhtml_legend=1 00:08:29.755 --rc geninfo_all_blocks=1 00:08:29.755 --rc geninfo_unexecuted_blocks=1 00:08:29.755 00:08:29.755 ' 00:08:29.755 08:44:42 version -- app/version.sh@17 -- # get_header_version major 00:08:29.755 08:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # cut -f2 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.755 08:44:42 version -- app/version.sh@17 -- # major=25 00:08:29.755 08:44:42 version -- app/version.sh@18 -- # get_header_version minor 00:08:29.755 08:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # cut -f2 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.755 08:44:42 version -- app/version.sh@18 -- # minor=1 00:08:29.755 08:44:42 version -- app/version.sh@19 -- # get_header_version patch 00:08:29.755 08:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # cut -f2 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.755 
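The `get_header_version` calls traced here grep a `#define` out of `include/spdk/version.h`, take the value field, and strip quotes. A minimal stand-alone re-creation of that pattern (the header contents, the `awk` field pick, and the `-pre` → `rc0` mapping are assumptions matching what the trace shows, not the verbatim version.sh source):

```shell
# Hypothetical re-creation of the get_header_version pattern traced above.
# A throwaway header stands in for include/spdk/version.h.
hdr=$(mktemp)
cat > "$hdr" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

get_header_version() {
    # grep the matching #define, keep the value column, drop the quotes
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | awk '{print $3}' | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

version="$major.$minor"
if [ "$patch" -ne 0 ]; then version="$version.$patch"; fi
# the trace shows a -pre suffix becoming an rc0 version string (25.1 -> 25.1rc0)
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi
echo "$version"
rm -f "$hdr"
```

With the values above this prints `25.1rc0`, matching the `py_version=25.1rc0` comparison in the trace.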
08:44:42 version -- app/version.sh@19 -- # patch=0 00:08:29.755 08:44:42 version -- app/version.sh@20 -- # get_header_version suffix 00:08:29.755 08:44:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # cut -f2 00:08:29.755 08:44:42 version -- app/version.sh@14 -- # tr -d '"' 00:08:29.755 08:44:42 version -- app/version.sh@20 -- # suffix=-pre 00:08:29.755 08:44:42 version -- app/version.sh@22 -- # version=25.1 00:08:29.755 08:44:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:29.756 08:44:42 version -- app/version.sh@28 -- # version=25.1rc0 00:08:29.756 08:44:42 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:29.756 08:44:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:29.756 08:44:42 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:29.756 08:44:42 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:29.756 00:08:29.756 real 0m0.200s 00:08:29.756 user 0m0.139s 00:08:29.756 sys 0m0.087s 00:08:29.756 08:44:42 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.756 08:44:42 version -- common/autotest_common.sh@10 -- # set +x 00:08:29.756 ************************************ 00:08:29.756 END TEST version 00:08:29.756 ************************************ 00:08:29.756 08:44:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:29.756 08:44:42 -- spdk/autotest.sh@194 -- # uname -s 00:08:29.756 08:44:42 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:29.756 08:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:29.756 08:44:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:29.756 08:44:42 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:29.756 08:44:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:29.756 08:44:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.756 08:44:42 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:29.756 08:44:42 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:29.756 08:44:42 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.756 08:44:42 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:29.756 08:44:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.756 08:44:42 -- common/autotest_common.sh@10 -- # set +x 00:08:29.756 ************************************ 00:08:29.756 START TEST nvmf_tcp 00:08:29.756 ************************************ 00:08:29.756 08:44:42 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:29.756 * Looking for test storage... 
00:08:29.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:29.756 08:44:42 nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:29.756 08:44:42 nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:08:29.756 08:44:42 nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.018 08:44:43 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.018 00:08:30.018 ' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.018 00:08:30.018 ' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.018 00:08:30.018 ' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.018 00:08:30.018 ' 00:08:30.018 08:44:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:30.018 08:44:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:30.018 08:44:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.018 08:44:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.018 ************************************ 00:08:30.018 START TEST nvmf_target_core 00:08:30.018 ************************************ 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:30.018 * Looking for test storage... 
00:08:30.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lcov --version 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.018 00:08:30.018 ' 00:08:30.018 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:30.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.018 --rc genhtml_branch_coverage=1 
00:08:30.018 --rc genhtml_function_coverage=1 00:08:30.018 --rc genhtml_legend=1 00:08:30.018 --rc geninfo_all_blocks=1 00:08:30.018 --rc geninfo_unexecuted_blocks=1 00:08:30.019 00:08:30.019 ' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.019 --rc genhtml_branch_coverage=1 00:08:30.019 --rc genhtml_function_coverage=1 00:08:30.019 --rc genhtml_legend=1 00:08:30.019 --rc geninfo_all_blocks=1 00:08:30.019 --rc geninfo_unexecuted_blocks=1 00:08:30.019 00:08:30.019 ' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:30.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.019 --rc genhtml_branch_coverage=1 00:08:30.019 --rc genhtml_function_coverage=1 00:08:30.019 --rc genhtml_legend=1 00:08:30.019 --rc geninfo_all_blocks=1 00:08:30.019 --rc geninfo_unexecuted_blocks=1 00:08:30.019 00:08:30.019 ' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
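The `[: : integer expression expected` message above is `test(1)` receiving an empty string where `-eq` needs an integer: `'[' '' -eq 1 ']'` fails because the variable being tested is unset/empty. A hypothetical guarded form (the helper name is made up for illustration; the fix is defaulting the operand before the numeric test):

```shell
# '[' '' -eq 1 ']' -> "[: : integer expression expected"
# Guard the numeric test by defaulting an empty/unset operand to 0.
is_enabled() {
    [ "${1:-0}" -eq 1 ]
}

is_enabled 1 && echo "enabled"
is_enabled "" || echo "disabled"   # empty no longer trips the integer error
```

The error in the trace is non-fatal only because the `[` failure status falls through to the `else` branch; the guard makes the intent explicit.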
00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.019 ************************************ 00:08:30.019 START TEST nvmf_abort 00:08:30.019 ************************************ 00:08:30.019 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:30.278 * Looking for test storage... 
00:08:30.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.278 
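The `cmp_versions` loop traced repeatedly in this log splits both version strings into components and compares them index by index. A condensed POSIX-sh sketch of the same idea (the real scripts/common.sh also validates each component with `decimal` and supports other operators; this sketch assumes purely numeric, dot-separated components):

```shell
# Hypothetical condensation of the scripts/common.sh version-compare loop:
# return 0 (true) when $1 sorts strictly before $2, component by component,
# treating missing components as 0 (so 1.15 < 1.15.1).
version_lt() {
    v1=$1 v2=$2
    while [ -n "$v1" ] || [ -n "$v2" ]; do
        a=${v1%%.*}; b=${v2%%.*}          # leading component of each
        [ "${a:-0}" -lt "${b:-0}" ] && return 0
        [ "${a:-0}" -gt "${b:-0}" ] && return 1
        case $v1 in *.*) v1=${v1#*.} ;; *) v1= ;; esac
        case $v2 in *.*) v2=${v2#*.} ;; *) v2= ;; esac
    done
    return 1                              # equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"     # the lcov check in the trace
```

This is the comparison behind the `lt 1.15 2` calls in the log: lcov 1.15 is older than 2, so the coverage scripts fall back to the `--rc lcov_*` option spelling.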
08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.278 --rc genhtml_branch_coverage=1 00:08:30.278 --rc genhtml_function_coverage=1 00:08:30.278 --rc genhtml_legend=1 00:08:30.278 --rc geninfo_all_blocks=1 00:08:30.278 --rc 
geninfo_unexecuted_blocks=1 00:08:30.278 00:08:30.278 ' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.278 --rc genhtml_branch_coverage=1 00:08:30.278 --rc genhtml_function_coverage=1 00:08:30.278 --rc genhtml_legend=1 00:08:30.278 --rc geninfo_all_blocks=1 00:08:30.278 --rc geninfo_unexecuted_blocks=1 00:08:30.278 00:08:30.278 ' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.278 --rc genhtml_branch_coverage=1 00:08:30.278 --rc genhtml_function_coverage=1 00:08:30.278 --rc genhtml_legend=1 00:08:30.278 --rc geninfo_all_blocks=1 00:08:30.278 --rc geninfo_unexecuted_blocks=1 00:08:30.278 00:08:30.278 ' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:30.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.278 --rc genhtml_branch_coverage=1 00:08:30.278 --rc genhtml_function_coverage=1 00:08:30.278 --rc genhtml_legend=1 00:08:30.278 --rc geninfo_all_blocks=1 00:08:30.278 --rc geninfo_unexecuted_blocks=1 00:08:30.278 00:08:30.278 ' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.278 08:44:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.278 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.279 08:44:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:32.813 08:44:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:32.813 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:32.813 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:32.813 08:44:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:32.813 Found net devices under 0000:09:00.0: cvl_0_0 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:08:32.813 Found net devices under 0000:09:00.1: cvl_0_1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.813 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:32.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:08:32.814 00:08:32.814 --- 10.0.0.2 ping statistics --- 00:08:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.814 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:08:32.814 00:08:32.814 --- 10.0.0.1 ping statistics --- 00:08:32.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.814 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=714079 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 714079 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 714079 ']' 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.814 08:44:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:32.814 [2024-11-06 08:44:45.839902] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:08:32.814 [2024-11-06 08:44:45.839988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.814 [2024-11-06 08:44:45.915991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.814 [2024-11-06 08:44:45.979171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.814 [2024-11-06 08:44:45.979236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.814 [2024-11-06 08:44:45.979249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.814 [2024-11-06 08:44:45.979260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.814 [2024-11-06 08:44:45.979269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.814 [2024-11-06 08:44:45.983855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.814 [2024-11-06 08:44:45.983882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.814 [2024-11-06 08:44:45.983886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 [2024-11-06 08:44:46.131967] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 Malloc0 00:08:33.073 08:44:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 Delay0 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 [2024-11-06 08:44:46.206595] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.073 08:44:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:33.073 [2024-11-06 08:44:46.280980] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:35.604 Initializing NVMe Controllers 00:08:35.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:35.604 controller IO queue size 128 less than required 00:08:35.604 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:35.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:35.604 Initialization complete. Launching workers. 
00:08:35.604 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 29096 00:08:35.604 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29161, failed to submit 62 00:08:35.604 success 29100, unsuccessful 61, failed 0 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:35.604 rmmod nvme_tcp 00:08:35.604 rmmod nvme_fabrics 00:08:35.604 rmmod nvme_keyring 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:35.604 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:35.604 08:44:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 714079 ']' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 714079 ']' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 714079' 00:08:35.605 killing process with pid 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 714079 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.605 08:44:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:37.515 00:08:37.515 real 0m7.453s 00:08:37.515 user 0m10.570s 00:08:37.515 sys 0m2.608s 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:37.515 ************************************ 00:08:37.515 END TEST nvmf_abort 00:08:37.515 ************************************ 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.515 08:44:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.774 ************************************ 00:08:37.774 START TEST nvmf_ns_hotplug_stress 00:08:37.774 ************************************ 00:08:37.774 08:44:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:37.774 * Looking for test storage... 00:08:37.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.774 
08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.774 08:44:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:08:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.774 --rc genhtml_branch_coverage=1 00:08:37.774 --rc genhtml_function_coverage=1 00:08:37.774 --rc genhtml_legend=1 00:08:37.774 --rc geninfo_all_blocks=1 00:08:37.774 --rc geninfo_unexecuted_blocks=1 00:08:37.774 00:08:37.774 ' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:08:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.774 --rc genhtml_branch_coverage=1 00:08:37.774 --rc genhtml_function_coverage=1 00:08:37.774 --rc genhtml_legend=1 00:08:37.774 --rc geninfo_all_blocks=1 00:08:37.774 --rc geninfo_unexecuted_blocks=1 00:08:37.774 00:08:37.774 ' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:08:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.774 --rc genhtml_branch_coverage=1 00:08:37.774 --rc genhtml_function_coverage=1 00:08:37.774 --rc genhtml_legend=1 00:08:37.774 --rc geninfo_all_blocks=1 00:08:37.774 --rc geninfo_unexecuted_blocks=1 00:08:37.774 00:08:37.774 ' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:08:37.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.774 --rc genhtml_branch_coverage=1 00:08:37.774 --rc genhtml_function_coverage=1 00:08:37.774 --rc genhtml_legend=1 00:08:37.774 --rc geninfo_all_blocks=1 00:08:37.774 --rc geninfo_unexecuted_blocks=1 00:08:37.774 
00:08:37.774 ' 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:37.774 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:37.775 08:44:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.303 08:44:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:40.303 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:40.303 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.303 08:44:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:40.303 Found net devices under 0000:09:00.0: cvl_0_0 00:08:40.303 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:40.304 08:44:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:40.304 Found net devices under 0000:09:00.1: cvl_0_1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.304 08:44:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:08:40.304 00:08:40.304 --- 10.0.0.2 ping statistics --- 00:08:40.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.304 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:08:40.304 00:08:40.304 --- 10.0.0.1 ping statistics --- 00:08:40.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.304 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=716441 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 716441 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 716441 ']' 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.304 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.304 [2024-11-06 08:44:53.397777] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:08:40.304 [2024-11-06 08:44:53.397851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.304 [2024-11-06 08:44:53.464937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:40.304 [2024-11-06 08:44:53.518989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.304 [2024-11-06 08:44:53.519042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.304 [2024-11-06 08:44:53.519062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.304 [2024-11-06 08:44:53.519073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.304 [2024-11-06 08:44:53.519082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.304 [2024-11-06 08:44:53.520518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.304 [2024-11-06 08:44:53.520581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.304 [2024-11-06 08:44:53.520584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:40.562 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:40.820 [2024-11-06 08:44:53.898872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.820 08:44:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:41.077 08:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:41.334 [2024-11-06 08:44:54.445464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:41.334 08:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.592 08:44:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:41.850 Malloc0 00:08:41.850 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:42.108 Delay0 00:08:42.108 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:42.365 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:42.623 NULL1 00:08:42.623 08:44:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:42.881 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=716746 00:08:42.881 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:42.881 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:42.881 08:44:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.253 Read completed with error (sct=0, sc=11) 00:08:44.253 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.253 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:44.253 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:44.511 true 00:08:44.511 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:44.511 08:44:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.443 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.701 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:45.701 08:44:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:45.959 true 00:08:45.959 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:45.959 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.216 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.474 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:46.474 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:46.732 true 00:08:46.732 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:46.732 08:44:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.989 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.247 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:47.247 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:47.504 true 00:08:47.504 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:47.504 08:45:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.463 08:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:48.784 08:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:48.784 08:45:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:49.092 true 00:08:49.092 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:49.092 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.371 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.629 
08:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:49.629 08:45:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:49.887 true 00:08:49.887 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:49.887 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.144 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.402 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:50.402 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:50.660 true 00:08:50.660 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:50.660 08:45:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.593 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.851 08:45:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:51.851 08:45:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:52.109 true 00:08:52.109 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:52.109 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.366 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.624 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:52.624 08:45:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:52.881 true 00:08:52.881 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:52.881 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.140 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.398 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:53.398 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:53.655 true 00:08:53.655 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:53.656 08:45:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:54.589 08:45:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.847 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:54.847 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:55.105 true 00:08:55.105 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:55.105 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.363 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.930 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:55.930 08:45:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:55.930 true 00:08:56.188 08:45:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:56.188 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.446 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.703 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:56.704 08:45:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:56.962 true 00:08:56.962 08:45:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:56.962 08:45:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.528 08:45:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.094 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:58.095 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:58.095 true 00:08:58.095 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:58.095 08:45:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.353 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.610 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:58.610 08:45:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:58.868 true 00:08:58.868 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:58.868 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.127 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.693 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:59.693 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:59.693 true 00:08:59.693 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:08:59.693 08:45:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.627 08:45:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.885 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:00.885 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:01.143 true 00:09:01.143 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:01.143 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.709 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.709 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:01.709 08:45:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:01.967 true 00:09:01.967 08:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:01.967 08:45:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.900 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.900 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.900 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.158 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:03.158 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:03.415 true 00:09:03.415 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:03.415 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.672 08:45:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.930 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:03.930 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:04.187 true 00:09:04.187 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:04.187 08:45:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:05.118 08:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.376 08:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:05.376 08:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:05.633 true 00:09:05.633 08:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:05.633 08:45:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.890 08:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.148 08:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:06.148 08:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:06.405 true 00:09:06.405 08:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:06.405 08:45:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.336 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.336 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.336 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.594 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:07.594 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:07.851 true 00:09:07.851 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:07.851 08:45:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.108 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.366 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:08.366 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:08.623 true 00:09:08.624 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:08.624 08:45:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:09.557 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:09.815 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:09.815 08:45:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:10.072 true 00:09:10.072 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:10.072 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.330 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.587 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:10.587 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:10.845 true 00:09:10.845 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:10.845 08:45:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.102 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.358 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:11.358 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:11.616 true 00:09:11.616 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:11.616 08:45:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.550 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.550 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:12.807 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:12.807 08:45:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:13.065 true 00:09:13.065 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:13.065 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.065 Initializing NVMe Controllers 00:09:13.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:13.065 Controller IO queue size 128, less than required. 00:09:13.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.065 Controller IO queue size 128, less than required. 00:09:13.065 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:13.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:13.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:13.065 Initialization complete. Launching workers. 00:09:13.065 ======================================================== 00:09:13.065 Latency(us) 00:09:13.065 Device Information : IOPS MiB/s Average min max 00:09:13.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 561.00 0.27 101524.86 3451.68 1012494.95 00:09:13.065 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9092.68 4.44 14078.42 3359.39 534151.98 00:09:13.065 ======================================================== 00:09:13.065 Total : 9653.68 4.71 19160.17 3359.39 1012494.95 00:09:13.065 00:09:13.322 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.580 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:13.581 08:45:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 
00:09:13.840 true 00:09:13.840 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 716746 00:09:13.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (716746) - No such process 00:09:13.840 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 716746 00:09:13.840 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.098 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:14.357 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:14.357 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:14.357 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:14.357 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.357 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:14.615 null0 00:09:14.615 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.615 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.615 08:45:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:14.873 null1 00:09:14.873 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.873 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.873 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:15.131 null2 00:09:15.131 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.131 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.131 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:15.389 null3 00:09:15.389 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.389 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.389 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:15.648 null4 00:09:15.648 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.648 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.648 08:45:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 
00:09:15.905 null5 00:09:15.905 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.905 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.905 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:16.164 null6 00:09:16.164 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.164 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.164 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:16.422 null7 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 721453 721455 721457 721460 721462 721464 721467 721469 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.422 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.989 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.989 08:45:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.989 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.247 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.248 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.506 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:17.764 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.765 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.765 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.765 08:45:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.022 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.022 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.022 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.022 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.022 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.023 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.023 08:45:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.023 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.281 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.539 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.798 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.798 08:45:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.056 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.056 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.056 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.056 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.057 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.315 08:45:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.315 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.573 08:45:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.573 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.574 08:45:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.832 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.090 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.091 08:45:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.091 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.658 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.916 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 
08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.917 08:45:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.175 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.433 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.434 08:45:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.434 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:21.692 08:45:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.692 08:45:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:21.950 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.951 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.209 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:22.467 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:22.724 rmmod nvme_tcp 00:09:22.724 rmmod nvme_fabrics 00:09:22.724 rmmod nvme_keyring 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 716441 ']' 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 716441 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' 
-z 716441 ']' 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 716441 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 716441 00:09:22.724 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:22.725 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:22.725 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 716441' 00:09:22.725 killing process with pid 716441 00:09:22.725 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 716441 00:09:22.725 08:45:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 716441 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- 
# iptables-restore 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.984 08:45:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.526 00:09:25.526 real 0m47.397s 00:09:25.526 user 3m39.351s 00:09:25.526 sys 0m16.518s 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:25.526 ************************************ 00:09:25.526 END TEST nvmf_ns_hotplug_stress 00:09:25.526 ************************************ 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.526 ************************************ 00:09:25.526 START TEST nvmf_delete_subsystem 00:09:25.526 ************************************ 00:09:25.526 08:45:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:25.526 * Looking for test storage... 00:09:25.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.526 08:45:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:25.526 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.527 08:45:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:25.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.527 --rc genhtml_branch_coverage=1 00:09:25.527 --rc genhtml_function_coverage=1 00:09:25.527 --rc genhtml_legend=1 00:09:25.527 --rc geninfo_all_blocks=1 00:09:25.527 --rc geninfo_unexecuted_blocks=1 00:09:25.527 00:09:25.527 ' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:25.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.527 --rc genhtml_branch_coverage=1 00:09:25.527 --rc genhtml_function_coverage=1 00:09:25.527 --rc genhtml_legend=1 00:09:25.527 --rc geninfo_all_blocks=1 00:09:25.527 --rc geninfo_unexecuted_blocks=1 00:09:25.527 00:09:25.527 ' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:25.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.527 --rc genhtml_branch_coverage=1 00:09:25.527 --rc genhtml_function_coverage=1 00:09:25.527 --rc genhtml_legend=1 00:09:25.527 --rc geninfo_all_blocks=1 00:09:25.527 --rc geninfo_unexecuted_blocks=1 00:09:25.527 00:09:25.527 ' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:25.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.527 --rc genhtml_branch_coverage=1 00:09:25.527 --rc genhtml_function_coverage=1 00:09:25.527 --rc genhtml_legend=1 00:09:25.527 --rc geninfo_all_blocks=1 00:09:25.527 --rc geninfo_unexecuted_blocks=1 00:09:25.527 00:09:25.527 ' 
00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.527 08:45:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.527 08:45:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:27.429 08:45:40 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:27.429 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:27.430 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:27.430 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:27.430 Found net devices under 0000:09:00.0: cvl_0_0 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:09:27.430 Found net devices under 0000:09:00.1: cvl_0_1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:27.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:27.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:09:27.430 00:09:27.430 --- 10.0.0.2 ping statistics --- 00:09:27.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.430 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:09:27.430 00:09:27.430 --- 10.0.0.1 ping statistics --- 00:09:27.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.430 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:27.430 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:27.689 08:45:40 
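The namespace plumbing traced above (nvmf/common.sh's nvmf_tcp_init) can be summarized as a short sequence. The sketch below mirrors the logged commands; interface names (cvl_0_0/cvl_0_1), addresses, and the namespace name are taken from the log, and `run` only echoes each step so the sequence can be inspected without root privileges — on a real test box the commands execute directly.

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup performed by nvmf/common.sh.
# run() echoes instead of executing: this is an inspectable dry run,
# not the production helper.
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace; serves 10.0.0.2:4420
INITIATOR_IF=cvl_0_1     # stays in the default namespace; gets 10.0.0.1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Connectivity is verified in both directions, as in the log:
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two ping checks are what produce the 0.187 ms / 0.090 ms round-trip lines above.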
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=724340 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 724340 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 724340 ']' 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.689 08:45:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.689 [2024-11-06 08:45:40.783584] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:09:27.689 [2024-11-06 08:45:40.783650] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.689 [2024-11-06 08:45:40.850071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.689 [2024-11-06 08:45:40.903336] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.689 [2024-11-06 08:45:40.903409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.689 [2024-11-06 08:45:40.903432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.689 [2024-11-06 08:45:40.903442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.689 [2024-11-06 08:45:40.903452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:27.689 [2024-11-06 08:45:40.904866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.689 [2024-11-06 08:45:40.904872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 [2024-11-06 08:45:41.043557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
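The waitforlisten step traced above polls until the freshly launched nvmf_tgt answers on its UNIX RPC socket (/var/tmp/spdk.sock), up to max_retries=100. The real helper issues an rpc.py call per attempt; the sketch below exercises the same retry shape against a file that appears after a short delay, standing in for the socket — the helper name and stand-in are illustrative, not the autotest_common.sh source.

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten retry pattern: poll a readiness check,
# sleeping briefly between attempts, until it succeeds or retries run out.
set -euo pipefail

waitforcondition() {          # waitforcondition <max_retries> <cmd...>
    local retries=$1; shift
    local i
    for ((i = 0; i < retries; i++)); do
        "$@" && return 0      # check passed: the target is up
        sleep 0.1
    done
    return 1                  # retry budget exhausted
}

sock=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &    # the "target" creating its socket late
waitforcondition 100 test -e "$sock" && echo "listening on $sock"
```

In the real script the per-attempt command is an RPC round trip rather than a file test, which is why the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." before returning 0.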
-- common/autotest_common.sh@10 -- # set +x 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 [2024-11-06 08:45:41.059754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 NULL1 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 Delay0 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=724479 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:27.948 08:45:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:27.948 [2024-11-06 08:45:41.144556] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
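The RPC sequence that delete_subsystem.sh has driven up to this point — and the deletion it is about to perform while spdk_nvme_perf has I/O in flight — is easier to read pulled out of the xtrace. Every command and argument below comes from the log; `rpc` echoes instead of invoking rpc.py (on the CI box each call runs inside the target's network namespace), so this is a dry-run summary, not a replacement for the script.

```shell
#!/usr/bin/env bash
# Dry-run summary of the delete_subsystem.sh RPC sequence.
set -euo pipefail

NQN=nqn.2016-06.io.spdk:cnode1
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0              # slow namespace keeps I/O queued
# With spdk_nvme_perf (-q 128) running against Delay0, delete the subsystem
# mid-I/O; outstanding requests then complete with errors (sc=8), as logged.
rpc nvmf_delete_subsystem "$NQN"
```

The delay bdev's 1 s latencies are what guarantee a deep queue of outstanding requests at deletion time, producing the long runs of "completed with error (sct=0, sc=8)" that follow.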
00:09:29.852 08:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.852 08:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.852 08:45:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 
00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 [2024-11-06 08:45:43.266957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f767c00cfe0 is same with the state(6) to be set 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read 
completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, 
sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 starting I/O failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 starting I/O 
failed: -6 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Write completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.110 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 starting I/O failed: -6 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 starting I/O failed: -6 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 starting I/O failed: -6 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 
Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 [2024-11-06 08:45:43.267856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d680 is same with the state(6) to be set 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Write completed with error (sct=0, sc=8) 00:09:30.111 Read completed with error (sct=0, sc=8) 00:09:31.044 [2024-11-06 08:45:44.238563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e9a0 is same with the state(6) to be set 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 [2024-11-06 08:45:44.265923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f767c00d310 is same with the state(6) to be set 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 
00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 [2024-11-06 08:45:44.269025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d860 is same with the state(6) to be set 00:09:31.044 Read 
completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 [2024-11-06 08:45:44.269977] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d2c0 is same with the state(6) to be set 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Write completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.044 Read completed with error (sct=0, sc=8) 00:09:31.045 Read completed with error (sct=0, sc=8) 00:09:31.045 Read completed with error (sct=0, sc=8) 00:09:31.045 Write completed with error (sct=0, sc=8) 00:09:31.045 Read completed with error (sct=0, sc=8) 
00:09:31.045 Read completed with error (sct=0, sc=8) 00:09:31.045 Write completed with error (sct=0, sc=8) 00:09:31.045 [2024-11-06 08:45:44.270208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5d4a0 is same with the state(6) to be set 00:09:31.045 Initializing NVMe Controllers 00:09:31.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:31.045 Controller IO queue size 128, less than required. 00:09:31.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:31.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:31.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:31.045 Initialization complete. Launching workers. 00:09:31.045 ======================================================== 00:09:31.045 Latency(us) 00:09:31.045 Device Information : IOPS MiB/s Average min max 00:09:31.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 184.06 0.09 994453.64 2165.84 2003079.06 00:09:31.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.36 0.07 915020.44 575.31 1012682.53 00:09:31.045 ======================================================== 00:09:31.045 Total : 330.42 0.16 959269.26 575.31 2003079.06 00:09:31.045 00:09:31.045 [2024-11-06 08:45:44.271027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e9a0 (9): Bad file descriptor 00:09:31.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:31.045 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.045 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:31.045 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@35 -- # kill -0 724479 00:09:31.045 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:31.610 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 724479 00:09:31.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (724479) - No such process 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 724479 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 724479 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 724479 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:31.611 08:45:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 [2024-11-06 08:45:44.795930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=724893 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:31.611 08:45:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.611 [2024-11-06 08:45:44.868713] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:09:32.177 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.177 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:32.177 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.742 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.742 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:32.742 08:45:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.307 08:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.307 08:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:33.307 08:45:46 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.564 08:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.565 08:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:33.565 08:45:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.130 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:34.130 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:34.130 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.696 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:34.696 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:34.696 08:45:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:34.953 Initializing NVMe Controllers 00:09:34.953 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:34.953 Controller IO queue size 128, less than required. 00:09:34.953 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:34.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:34.953 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:34.953 Initialization complete. Launching workers. 
00:09:34.953 ======================================================== 00:09:34.953 Latency(us) 00:09:34.953 Device Information : IOPS MiB/s Average min max 00:09:34.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004116.78 1000163.08 1011891.83 00:09:34.953 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004733.46 1000185.28 1041938.65 00:09:34.953 ======================================================== 00:09:34.953 Total : 256.00 0.12 1004425.12 1000163.08 1041938.65 00:09:34.953 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 724893 00:09:35.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (724893) - No such process 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 724893 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:09:35.211 rmmod nvme_tcp 00:09:35.211 rmmod nvme_fabrics 00:09:35.211 rmmod nvme_keyring 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 724340 ']' 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 724340 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 724340 ']' 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 724340 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 724340 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 724340' 00:09:35.211 killing process with pid 724340 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 724340 00:09:35.211 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 724340 
00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:35.471 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:09:35.472 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:35.472 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:35.472 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.472 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.472 08:45:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.009 00:09:38.009 real 0m12.452s 00:09:38.009 user 0m27.979s 00:09:38.009 sys 0m2.970s 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 ************************************ 00:09:38.009 END TEST 
nvmf_delete_subsystem 00:09:38.009 ************************************ 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.009 ************************************ 00:09:38.009 START TEST nvmf_host_management 00:09:38.009 ************************************ 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:38.009 * Looking for test storage... 00:09:38.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.009 08:45:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.009 --rc genhtml_branch_coverage=1 00:09:38.009 --rc genhtml_function_coverage=1 00:09:38.009 --rc genhtml_legend=1 00:09:38.009 --rc 
geninfo_all_blocks=1 00:09:38.009 --rc geninfo_unexecuted_blocks=1 00:09:38.009 00:09:38.009 ' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.009 --rc genhtml_branch_coverage=1 00:09:38.009 --rc genhtml_function_coverage=1 00:09:38.009 --rc genhtml_legend=1 00:09:38.009 --rc geninfo_all_blocks=1 00:09:38.009 --rc geninfo_unexecuted_blocks=1 00:09:38.009 00:09:38.009 ' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.009 --rc genhtml_branch_coverage=1 00:09:38.009 --rc genhtml_function_coverage=1 00:09:38.009 --rc genhtml_legend=1 00:09:38.009 --rc geninfo_all_blocks=1 00:09:38.009 --rc geninfo_unexecuted_blocks=1 00:09:38.009 00:09:38.009 ' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.009 --rc genhtml_branch_coverage=1 00:09:38.009 --rc genhtml_function_coverage=1 00:09:38.009 --rc genhtml_legend=1 00:09:38.009 --rc geninfo_all_blocks=1 00:09:38.009 --rc geninfo_unexecuted_blocks=1 00:09:38.009 00:09:38.009 ' 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.009 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.010 
08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:38.010 08:45:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:39.915 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:39.915 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:39.915 08:45:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:39.915 Found net devices under 0000:09:00.0: cvl_0_0 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:39.915 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:39.916 Found net devices under 0000:09:00.1: cvl_0_1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:39.916 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:09:40.175 00:09:40.175 --- 10.0.0.2 ping statistics --- 00:09:40.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.175 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:09:40.175 00:09:40.175 --- 10.0.0.1 ping statistics --- 00:09:40.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.175 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.175 08:45:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=727248 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 727248 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 727248 ']' 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.175 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.175 [2024-11-06 08:45:53.314701] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
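The namespace plumbing traced above (nvmf_tcp_init, common.sh@250-291) and the `ip netns exec ... nvmf_tgt` launch on this line both need root, so the harness runs them directly. A condensed dry-run sketch of the same sequence, using the interface names and IPs from this log — every command is only printed here; swap `run` to execute for real:

```shell
#!/usr/bin/env bash
# Condensed sketch of nvmf_tcp_init plus the namespaced target launch from the
# log. Needs root to run for real; here each command is only printed.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run() { echo "+ $*"; }            # swap for: run() { "$@"; } to execute

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                # target NIC moves into the netns
run ip addr add 10.0.0.1/24 dev "$initiator_if"         # initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1              # target -> initiator
run ip netns exec "$ns" nvmf_tgt -i 0 -e 0xFFFF -m 0x1E # target app inside the netns
```

The real harness also tags the iptables rule with an `SPDK_NVMF` comment (via the `ipts` wrapper) so it can be cleaned up later; that detail is elided here.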
00:09:40.175 [2024-11-06 08:45:53.314807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.175 [2024-11-06 08:45:53.387283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.175 [2024-11-06 08:45:53.441941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.175 [2024-11-06 08:45:53.441996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.175 [2024-11-06 08:45:53.442020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.175 [2024-11-06 08:45:53.442031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.175 [2024-11-06 08:45:53.442041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
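Both SPDK apps in this run take a `-m` core bitmask (`0x1E` for the target here, `0x1` for bdevperf later). Decoding `0x1E` shows why the reactor lines that follow report cores 1 through 4; a quick illustration:

```shell
#!/usr/bin/env bash
# Decode an SPDK -m core mask into the reactor cores it selects.
mask=$((0x1E))               # target mask from the log
cores=()
for bit in {0..31}; do
  if (( mask & (1 << bit) )); then
    cores+=("$bit")          # bit N set -> reactor runs on core N
  fi
done
echo "0x1E -> cores ${cores[*]}"   # 0b11110 -> cores 1 2 3 4
```

Core 0 is left out of the target mask so bdevperf (`-c 0x1`) gets it to itself.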
00:09:40.175 [2024-11-06 08:45:53.443612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.175 [2024-11-06 08:45:53.443717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.175 [2024-11-06 08:45:53.443825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.175 [2024-11-06 08:45:53.443878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.434 [2024-11-06 08:45:53.590875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:40.434 08:45:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:40.434 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.435 Malloc0 00:09:40.435 [2024-11-06 08:45:53.659294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=727408 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 727408 /var/tmp/bdevperf.sock 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 727408 ']' 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:40.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:40.435 { 00:09:40.435 "params": { 00:09:40.435 "name": "Nvme$subsystem", 00:09:40.435 "trtype": "$TEST_TRANSPORT", 00:09:40.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.435 "adrfam": "ipv4", 00:09:40.435 "trsvcid": "$NVMF_PORT", 00:09:40.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.435 "hdgst": ${hdgst:-false}, 
00:09:40.435 "ddgst": ${ddgst:-false} 00:09:40.435 }, 00:09:40.435 "method": "bdev_nvme_attach_controller" 00:09:40.435 } 00:09:40.435 EOF 00:09:40.435 )") 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:09:40.435 08:45:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:40.435 "params": { 00:09:40.435 "name": "Nvme0", 00:09:40.435 "trtype": "tcp", 00:09:40.435 "traddr": "10.0.0.2", 00:09:40.435 "adrfam": "ipv4", 00:09:40.435 "trsvcid": "4420", 00:09:40.435 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:40.435 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:40.435 "hdgst": false, 00:09:40.435 "ddgst": false 00:09:40.435 }, 00:09:40.435 "method": "bdev_nvme_attach_controller" 00:09:40.435 }' 00:09:40.693 [2024-11-06 08:45:53.742474] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:09:40.693 [2024-11-06 08:45:53.742563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727408 ] 00:09:40.693 [2024-11-06 08:45:53.812942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.693 [2024-11-06 08:45:53.873438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.951 Running I/O for 10 seconds... 
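gen_nvmf_target_json fills a per-controller heredoc template (common.sh@558-584) and hands the result to bdevperf over `/dev/fd/63`. A standalone sketch of just that expansion step, plugging in the same values this log printed — the surrounding `{"subsystems": ...}` wrapper and the `jq` join are elided:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json expansion: the template from the log,
# filled with the values the harness printed. Only the per-controller
# fragment is reproduced; the full helper wraps it in a bdev subsystem config.
subsystem=0
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

With `hdgst`/`ddgst` unset, the `${hdgst:-false}` defaults produce the `"hdgst": false, "ddgst": false` seen in the printf output above, and the waitforio loop that follows then polls `bdev_get_iostat -b Nvme0n1` until `num_read_ops` crosses 100.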
00:09:40.951 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.951 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:09:40.951 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:40.951 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:40.952 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.212 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:41.212 [2024-11-06 08:45:54.496353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.212 [2024-11-06 08:45:54.496419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.212 [2024-11-06 08:45:54.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.212 [2024-11-06 08:45:54.496485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:41.212 [2024-11-06 08:45:54.496514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d38a40 is same with the state(6) to be set 00:09:41.212 [2024-11-06 08:45:54.496902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.212 [2024-11-06 08:45:54.496928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.212 [2024-11-06 08:45:54.496969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.496985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.212 [2024-11-06 08:45:54.496999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 
[2024-11-06 08:45:54.497015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.212 [2024-11-06 08:45:54.497030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.212 [2024-11-06 08:45:54.497046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.212 [2024-11-06 08:45:54.497061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.213 [2024-11-06 08:45:54.497076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.213 [2024-11-06 08:45:54.497091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.213 [2024-11-06 08:45:54.497107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.213 [2024-11-06 08:45:54.497122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.213 [2024-11-06 08:45:54.497137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.213 [2024-11-06 08:45:54.497159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.213 [2024-11-06 08:45:54.497182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:41.213 [2024-11-06 08:45:54.497212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.213 [2024-11-06 08:45:54.497229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: the identical WRITE sqid:1 / ABORTED - SQ DELETION (00/08) notice pair repeats for cid:10 through cid:63 (lba 83200 through 89984, len:128), all between 2024-11-06 08:45:54.497 and 08:45:54.499]
00:09:41.214 [2024-11-06 08:45:54.499005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:41.214 [2024-11-06 08:45:54.500245] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:41.472 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:41.472 00:09:41.472 Latency(us) 00:09:41.472 [2024-11-06T07:45:54.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:09:41.472 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:41.472 Job: Nvme0n1 ended in about 0.40 seconds with error 00:09:41.472 Verification LBA range: start 0x0 length 0x400 00:09:41.472 Nvme0n1 : 0.40 1585.47 99.09 158.55 0.00 35646.28 3058.35 34175.81 00:09:41.472 [2024-11-06T07:45:54.761Z] =================================================================================================================== 00:09:41.472 [2024-11-06T07:45:54.761Z] Total : 1585.47 99.09 158.55 0.00 35646.28 3058.35 34175.81 00:09:41.472 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.472 [2024-11-06 08:45:54.502173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.472 [2024-11-06 08:45:54.502210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d38a40 (9): Bad file descriptor 00:09:41.472 08:45:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:41.472 [2024-11-06 08:45:54.604011] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 727408 00:09:42.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (727408) - No such process 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:42.407 { 00:09:42.407 "params": { 00:09:42.407 "name": "Nvme$subsystem", 00:09:42.407 "trtype": "$TEST_TRANSPORT", 00:09:42.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.407 "adrfam": "ipv4", 00:09:42.407 "trsvcid": "$NVMF_PORT", 00:09:42.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.407 "hdgst": ${hdgst:-false}, 00:09:42.407 "ddgst": ${ddgst:-false} 00:09:42.407 }, 00:09:42.407 "method": "bdev_nvme_attach_controller" 00:09:42.407 } 00:09:42.407 EOF 00:09:42.407 )") 00:09:42.407 08:45:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:09:42.407 08:45:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:42.407 "params": { 00:09:42.407 "name": "Nvme0", 00:09:42.407 "trtype": "tcp", 00:09:42.407 "traddr": "10.0.0.2", 00:09:42.407 "adrfam": "ipv4", 00:09:42.407 "trsvcid": "4420", 00:09:42.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:42.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:42.407 "hdgst": false, 00:09:42.407 "ddgst": false 00:09:42.407 }, 00:09:42.407 "method": "bdev_nvme_attach_controller" 00:09:42.407 }' 00:09:42.407 [2024-11-06 08:45:55.555973] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:09:42.407 [2024-11-06 08:45:55.556051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid727573 ] 00:09:42.407 [2024-11-06 08:45:55.627929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.407 [2024-11-06 08:45:55.688170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.665 Running I/O for 1 seconds... 
00:09:43.858 1664.00 IOPS, 104.00 MiB/s 00:09:43.858 Latency(us) 00:09:43.858 [2024-11-06T07:45:57.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.858 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:43.858 Verification LBA range: start 0x0 length 0x400 00:09:43.858 Nvme0n1 : 1.02 1696.21 106.01 0.00 0.00 37111.75 6699.24 33399.09 00:09:43.858 [2024-11-06T07:45:57.147Z] =================================================================================================================== 00:09:43.858 [2024-11-06T07:45:57.147Z] Total : 1696.21 106.01 0.00 0.00 37111.75 6699.24 33399.09 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:43.858 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.858 08:45:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.858 rmmod nvme_tcp 00:09:43.858 rmmod nvme_fabrics 00:09:44.119 rmmod nvme_keyring 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 727248 ']' 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 727248 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 727248 ']' 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 727248 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 727248 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 727248' 00:09:44.119 killing process with pid 727248 00:09:44.119 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 727248 00:09:44.119 08:45:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 727248 00:09:44.410 [2024-11-06 08:45:57.431368] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.410 08:45:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:46.387 00:09:46.387 real 0m8.743s 00:09:46.387 user 0m19.236s 
00:09:46.387 sys 0m2.718s 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:46.387 ************************************ 00:09:46.387 END TEST nvmf_host_management 00:09:46.387 ************************************ 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.387 ************************************ 00:09:46.387 START TEST nvmf_lvol 00:09:46.387 ************************************ 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:46.387 * Looking for test storage... 
00:09:46.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:09:46.387 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.647 08:45:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:09:46.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.647 --rc genhtml_branch_coverage=1 00:09:46.647 --rc genhtml_function_coverage=1 00:09:46.647 --rc genhtml_legend=1 00:09:46.647 --rc geninfo_all_blocks=1 00:09:46.647 --rc geninfo_unexecuted_blocks=1 
00:09:46.647 00:09:46.647 ' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:09:46.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.647 --rc genhtml_branch_coverage=1 00:09:46.647 --rc genhtml_function_coverage=1 00:09:46.647 --rc genhtml_legend=1 00:09:46.647 --rc geninfo_all_blocks=1 00:09:46.647 --rc geninfo_unexecuted_blocks=1 00:09:46.647 00:09:46.647 ' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:09:46.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.647 --rc genhtml_branch_coverage=1 00:09:46.647 --rc genhtml_function_coverage=1 00:09:46.647 --rc genhtml_legend=1 00:09:46.647 --rc geninfo_all_blocks=1 00:09:46.647 --rc geninfo_unexecuted_blocks=1 00:09:46.647 00:09:46.647 ' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:09:46.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.647 --rc genhtml_branch_coverage=1 00:09:46.647 --rc genhtml_function_coverage=1 00:09:46.647 --rc genhtml_legend=1 00:09:46.647 --rc geninfo_all_blocks=1 00:09:46.647 --rc geninfo_unexecuted_blocks=1 00:09:46.647 00:09:46.647 ' 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.647 08:45:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:46.647 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.648 08:45:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:49.180 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:49.181 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:49.181 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.181 
08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:49.181 Found net devices under 0000:09:00.0: cvl_0_0 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:49.181 08:46:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:49.181 Found net devices under 0000:09:00.1: cvl_0_1 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.181 08:46:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:49.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:09:49.181 00:09:49.181 --- 10.0.0.2 ping statistics --- 00:09:49.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.181 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:09:49.181 00:09:49.181 --- 10.0.0.1 ping statistics --- 00:09:49.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.181 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=729797 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 729797 00:09:49.181 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 729797 ']' 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:49.182 [2024-11-06 08:46:02.161945] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:09:49.182 [2024-11-06 08:46:02.162014] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.182 [2024-11-06 08:46:02.227597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.182 [2024-11-06 08:46:02.281375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.182 [2024-11-06 08:46:02.281428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.182 [2024-11-06 08:46:02.281451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.182 [2024-11-06 08:46:02.281462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.182 [2024-11-06 08:46:02.281471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:49.182 [2024-11-06 08:46:02.282891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.182 [2024-11-06 08:46:02.282954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.182 [2024-11-06 08:46:02.282958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.182 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:49.440 [2024-11-06 08:46:02.676372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.440 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.005 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:50.005 08:46:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:50.005 08:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:50.005 08:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:50.263 08:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:50.828 08:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=da5d45f6-9f54-45fe-89d4-838404d5c329 00:09:50.828 08:46:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da5d45f6-9f54-45fe-89d4-838404d5c329 lvol 20 00:09:51.085 08:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d27d8333-721f-4cbb-baf7-c1ef80f7d4c9 00:09:51.085 08:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:51.343 08:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d27d8333-721f-4cbb-baf7-c1ef80f7d4c9 00:09:51.600 08:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:51.857 [2024-11-06 08:46:04.918081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.858 08:46:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.115 08:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=730223 00:09:52.115 08:46:05 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:52.115 08:46:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:53.047 08:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d27d8333-721f-4cbb-baf7-c1ef80f7d4c9 MY_SNAPSHOT 00:09:53.304 08:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=55080a7c-5a9f-4f49-a683-e3c20eb86c68 00:09:53.305 08:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d27d8333-721f-4cbb-baf7-c1ef80f7d4c9 30 00:09:53.870 08:46:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 55080a7c-5a9f-4f49-a683-e3c20eb86c68 MY_CLONE 00:09:54.127 08:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b1cb3762-d662-4ee5-8ebf-3ef90b19ff74 00:09:54.127 08:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b1cb3762-d662-4ee5-8ebf-3ef90b19ff74 00:09:54.693 08:46:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 730223 00:10:02.797 Initializing NVMe Controllers 00:10:02.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:02.797 Controller IO queue size 128, less than required. 00:10:02.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:02.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:02.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:02.797 Initialization complete. Launching workers. 00:10:02.797 ======================================================== 00:10:02.797 Latency(us) 00:10:02.797 Device Information : IOPS MiB/s Average min max 00:10:02.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10447.90 40.81 12254.22 2014.96 80699.13 00:10:02.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10411.20 40.67 12301.43 2203.38 62769.72 00:10:02.797 ======================================================== 00:10:02.798 Total : 20859.10 81.48 12277.79 2014.96 80699.13 00:10:02.798 00:10:02.798 08:46:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:02.798 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d27d8333-721f-4cbb-baf7-c1ef80f7d4c9 00:10:03.055 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da5d45f6-9f54-45fe-89d4-838404d5c329 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.313 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.313 rmmod nvme_tcp 00:10:03.571 rmmod nvme_fabrics 00:10:03.571 rmmod nvme_keyring 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 729797 ']' 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 729797 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 729797 ']' 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 729797 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 729797 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 729797' 00:10:03.571 killing process with pid 729797 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@969 -- # kill 729797 00:10:03.571 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 729797 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:03.830 08:46:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.737 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.996 00:10:05.996 real 0m19.473s 00:10:05.996 user 1m5.598s 00:10:05.996 sys 0m5.798s 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:05.996 ************************************ 00:10:05.996 END TEST nvmf_lvol 00:10:05.996 
************************************ 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:05.996 ************************************ 00:10:05.996 START TEST nvmf_lvs_grow 00:10:05.996 ************************************ 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:05.996 * Looking for test storage... 00:10:05.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:05.996 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.997 --rc genhtml_branch_coverage=1 00:10:05.997 --rc genhtml_function_coverage=1 00:10:05.997 --rc genhtml_legend=1 00:10:05.997 --rc geninfo_all_blocks=1 00:10:05.997 --rc geninfo_unexecuted_blocks=1 00:10:05.997 00:10:05.997 ' 
00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.997 --rc genhtml_branch_coverage=1 00:10:05.997 --rc genhtml_function_coverage=1 00:10:05.997 --rc genhtml_legend=1 00:10:05.997 --rc geninfo_all_blocks=1 00:10:05.997 --rc geninfo_unexecuted_blocks=1 00:10:05.997 00:10:05.997 ' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.997 --rc genhtml_branch_coverage=1 00:10:05.997 --rc genhtml_function_coverage=1 00:10:05.997 --rc genhtml_legend=1 00:10:05.997 --rc geninfo_all_blocks=1 00:10:05.997 --rc geninfo_unexecuted_blocks=1 00:10:05.997 00:10:05.997 ' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:05.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.997 --rc genhtml_branch_coverage=1 00:10:05.997 --rc genhtml_function_coverage=1 00:10:05.997 --rc genhtml_legend=1 00:10:05.997 --rc geninfo_all_blocks=1 00:10:05.997 --rc geninfo_unexecuted_blocks=1 00:10:05.997 00:10:05.997 ' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.997 08:46:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.997 
08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.997 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.998 08:46:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.998 
08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.998 08:46:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:08.536 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:08.536 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.536 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.537 
08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:08.537 Found net devices under 0000:09:00.0: cvl_0_0 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:08.537 Found net devices under 0000:09:00.1: cvl_0_1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.537 08:46:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:10:08.537 00:10:08.537 --- 10.0.0.2 ping statistics --- 00:10:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.537 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:10:08.537 00:10:08.537 --- 10.0.0.1 ping statistics --- 00:10:08.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.537 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=733512 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 733512 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 733512 ']' 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.537 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.537 [2024-11-06 08:46:21.592601] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:10:08.537 [2024-11-06 08:46:21.592683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.537 [2024-11-06 08:46:21.664128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.537 [2024-11-06 08:46:21.723065] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.537 [2024-11-06 08:46:21.723116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.537 [2024-11-06 08:46:21.723130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.537 [2024-11-06 08:46:21.723142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.537 [2024-11-06 08:46:21.723152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:08.537 [2024-11-06 08:46:21.723743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.795 08:46:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:09.053 [2024-11-06 08:46:22.128904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:09.053 ************************************ 00:10:09.053 START TEST lvs_grow_clean 00:10:09.053 ************************************ 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:09.053 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:09.054 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:09.311 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:09.311 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:09.569 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:09.569 08:46:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:09.569 08:46:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:09.827 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:09.827 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:09.827 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 lvol 150 00:10:10.086 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1296618e-c8b6-471f-ada2-1c574d39baca 00:10:10.086 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:10.086 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:10.344 [2024-11-06 08:46:23.542210] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:10.344 [2024-11-06 08:46:23.542286] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:10.344 true 00:10:10.344 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:10.344 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:10.602 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:10.602 08:46:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:10.860 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1296618e-c8b6-471f-ada2-1c574d39baca 00:10:11.119 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:11.377 [2024-11-06 08:46:24.621477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.377 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=733952 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:11.636 08:46:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 733952 /var/tmp/bdevperf.sock 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 733952 ']' 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:11.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.636 08:46:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:11.894 [2024-11-06 08:46:24.950548] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:10:11.894 [2024-11-06 08:46:24.950635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid733952 ] 00:10:11.894 [2024-11-06 08:46:25.016420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.894 [2024-11-06 08:46:25.076396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.152 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.152 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:12.152 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:12.410 Nvme0n1 00:10:12.668 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:12.668 [ 00:10:12.668 { 00:10:12.668 "name": "Nvme0n1", 00:10:12.668 "aliases": [ 00:10:12.668 "1296618e-c8b6-471f-ada2-1c574d39baca" 00:10:12.668 ], 00:10:12.668 "product_name": "NVMe disk", 00:10:12.668 "block_size": 4096, 00:10:12.668 "num_blocks": 38912, 00:10:12.668 "uuid": "1296618e-c8b6-471f-ada2-1c574d39baca", 00:10:12.668 "numa_id": 0, 00:10:12.668 "assigned_rate_limits": { 00:10:12.668 "rw_ios_per_sec": 0, 00:10:12.668 "rw_mbytes_per_sec": 0, 00:10:12.668 "r_mbytes_per_sec": 0, 00:10:12.668 "w_mbytes_per_sec": 0 00:10:12.668 }, 00:10:12.668 "claimed": false, 00:10:12.668 "zoned": false, 00:10:12.668 "supported_io_types": { 00:10:12.668 "read": true, 
00:10:12.668 "write": true, 00:10:12.668 "unmap": true, 00:10:12.668 "flush": true, 00:10:12.668 "reset": true, 00:10:12.668 "nvme_admin": true, 00:10:12.668 "nvme_io": true, 00:10:12.668 "nvme_io_md": false, 00:10:12.668 "write_zeroes": true, 00:10:12.668 "zcopy": false, 00:10:12.668 "get_zone_info": false, 00:10:12.668 "zone_management": false, 00:10:12.668 "zone_append": false, 00:10:12.668 "compare": true, 00:10:12.668 "compare_and_write": true, 00:10:12.668 "abort": true, 00:10:12.668 "seek_hole": false, 00:10:12.668 "seek_data": false, 00:10:12.668 "copy": true, 00:10:12.668 "nvme_iov_md": false 00:10:12.668 }, 00:10:12.668 "memory_domains": [ 00:10:12.668 { 00:10:12.668 "dma_device_id": "system", 00:10:12.668 "dma_device_type": 1 00:10:12.668 } 00:10:12.668 ], 00:10:12.668 "driver_specific": { 00:10:12.668 "nvme": [ 00:10:12.668 { 00:10:12.668 "trid": { 00:10:12.668 "trtype": "TCP", 00:10:12.668 "adrfam": "IPv4", 00:10:12.668 "traddr": "10.0.0.2", 00:10:12.668 "trsvcid": "4420", 00:10:12.668 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:12.668 }, 00:10:12.668 "ctrlr_data": { 00:10:12.668 "cntlid": 1, 00:10:12.668 "vendor_id": "0x8086", 00:10:12.668 "model_number": "SPDK bdev Controller", 00:10:12.668 "serial_number": "SPDK0", 00:10:12.668 "firmware_revision": "25.01", 00:10:12.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.668 "oacs": { 00:10:12.668 "security": 0, 00:10:12.668 "format": 0, 00:10:12.668 "firmware": 0, 00:10:12.668 "ns_manage": 0 00:10:12.668 }, 00:10:12.668 "multi_ctrlr": true, 00:10:12.668 "ana_reporting": false 00:10:12.668 }, 00:10:12.668 "vs": { 00:10:12.668 "nvme_version": "1.3" 00:10:12.668 }, 00:10:12.668 "ns_data": { 00:10:12.668 "id": 1, 00:10:12.668 "can_share": true 00:10:12.668 } 00:10:12.668 } 00:10:12.668 ], 00:10:12.668 "mp_policy": "active_passive" 00:10:12.668 } 00:10:12.668 } 00:10:12.668 ] 00:10:12.926 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=734088 
00:10:12.926 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:12.926 08:46:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.926 Running I/O for 10 seconds... 00:10:13.861 Latency(us) 00:10:13.861 [2024-11-06T07:46:27.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.861 Nvme0n1 : 1.00 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:10:13.861 [2024-11-06T07:46:27.150Z] =================================================================================================================== 00:10:13.861 [2024-11-06T07:46:27.150Z] Total : 15368.00 60.03 0.00 0.00 0.00 0.00 0.00 00:10:13.861 00:10:14.795 08:46:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:14.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.795 Nvme0n1 : 2.00 15621.50 61.02 0.00 0.00 0.00 0.00 0.00 00:10:14.795 [2024-11-06T07:46:28.084Z] =================================================================================================================== 00:10:14.795 [2024-11-06T07:46:28.084Z] Total : 15621.50 61.02 0.00 0.00 0.00 0.00 0.00 00:10:14.795 00:10:15.053 true 00:10:15.053 08:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:15.053 08:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:15.311 08:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:15.311 08:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:15.311 08:46:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 734088 00:10:15.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.878 Nvme0n1 : 3.00 15706.00 61.35 0.00 0.00 0.00 0.00 0.00 00:10:15.878 [2024-11-06T07:46:29.167Z] =================================================================================================================== 00:10:15.878 [2024-11-06T07:46:29.167Z] Total : 15706.00 61.35 0.00 0.00 0.00 0.00 0.00 00:10:15.878 00:10:16.812 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.812 Nvme0n1 : 4.00 15780.00 61.64 0.00 0.00 0.00 0.00 0.00 00:10:16.812 [2024-11-06T07:46:30.101Z] =================================================================================================================== 00:10:16.812 [2024-11-06T07:46:30.102Z] Total : 15780.00 61.64 0.00 0.00 0.00 0.00 0.00 00:10:16.813 00:10:18.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.188 Nvme0n1 : 5.00 15849.80 61.91 0.00 0.00 0.00 0.00 0.00 00:10:18.188 [2024-11-06T07:46:31.477Z] =================================================================================================================== 00:10:18.188 [2024-11-06T07:46:31.477Z] Total : 15849.80 61.91 0.00 0.00 0.00 0.00 0.00 00:10:18.188 00:10:19.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.122 Nvme0n1 : 6.00 15907.17 62.14 0.00 0.00 0.00 0.00 0.00 00:10:19.122 [2024-11-06T07:46:32.411Z] =================================================================================================================== 00:10:19.122 
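The `data_clusters` values asserted in the log (49 before the grow, 99 after `bdev_lvol_grow_lvstore`) follow from the 4 MiB cluster size passed at lvstore creation (`--cluster-sz 4194304`). A minimal sketch of that arithmetic, assuming the lvstore consumes exactly one cluster for its own metadata in this configuration (consistent with the 49-of-50 and 99-of-100 counts seen here):

```python
def data_clusters(aio_size_mb: int, cluster_sz_mb: int = 4, md_clusters: int = 1) -> int:
    """Data clusters an lvstore exposes on an AIO file of the given size.

    Assumption: one cluster is reserved for lvstore metadata, which matches
    the counts this test run reports; the real overhead depends on the
    metadata layout and --md-pages-per-cluster-ratio.
    """
    return aio_size_mb // cluster_sz_mb - md_clusters

print(data_clusters(200))  # initial 200 MiB AIO file -> 49
print(data_clusters(400))  # after 'truncate -s 400M' + grow_lvstore -> 99
```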
[2024-11-06T07:46:32.411Z] Total : 15907.17 62.14 0.00 0.00 0.00 0.00 0.00 00:10:19.122 00:10:20.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.056 Nvme0n1 : 7.00 15966.71 62.37 0.00 0.00 0.00 0.00 0.00 00:10:20.056 [2024-11-06T07:46:33.345Z] =================================================================================================================== 00:10:20.056 [2024-11-06T07:46:33.345Z] Total : 15966.71 62.37 0.00 0.00 0.00 0.00 0.00 00:10:20.056 00:10:20.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.990 Nvme0n1 : 8.00 15995.00 62.48 0.00 0.00 0.00 0.00 0.00 00:10:20.990 [2024-11-06T07:46:34.280Z] =================================================================================================================== 00:10:20.991 [2024-11-06T07:46:34.280Z] Total : 15995.00 62.48 0.00 0.00 0.00 0.00 0.00 00:10:20.991 00:10:21.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.923 Nvme0n1 : 9.00 16034.78 62.64 0.00 0.00 0.00 0.00 0.00 00:10:21.923 [2024-11-06T07:46:35.212Z] =================================================================================================================== 00:10:21.923 [2024-11-06T07:46:35.212Z] Total : 16034.78 62.64 0.00 0.00 0.00 0.00 0.00 00:10:21.923 00:10:22.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.856 Nvme0n1 : 10.00 16069.60 62.77 0.00 0.00 0.00 0.00 0.00 00:10:22.856 [2024-11-06T07:46:36.145Z] =================================================================================================================== 00:10:22.856 [2024-11-06T07:46:36.145Z] Total : 16069.60 62.77 0.00 0.00 0.00 0.00 0.00 00:10:22.856 00:10:22.856 00:10:22.856 Latency(us) 00:10:22.856 [2024-11-06T07:46:36.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:22.856 Nvme0n1 : 10.01 16070.48 62.78 0.00 0.00 7960.05 4296.25 16699.54 00:10:22.856 [2024-11-06T07:46:36.145Z] =================================================================================================================== 00:10:22.856 [2024-11-06T07:46:36.145Z] Total : 16070.48 62.78 0.00 0.00 7960.05 4296.25 16699.54 00:10:22.856 { 00:10:22.856 "results": [ 00:10:22.856 { 00:10:22.856 "job": "Nvme0n1", 00:10:22.856 "core_mask": "0x2", 00:10:22.856 "workload": "randwrite", 00:10:22.856 "status": "finished", 00:10:22.856 "queue_depth": 128, 00:10:22.856 "io_size": 4096, 00:10:22.856 "runtime": 10.007416, 00:10:22.856 "iops": 16070.482130452057, 00:10:22.856 "mibps": 62.77532082207835, 00:10:22.856 "io_failed": 0, 00:10:22.856 "io_timeout": 0, 00:10:22.856 "avg_latency_us": 7960.0534662782975, 00:10:22.856 "min_latency_us": 4296.248888888889, 00:10:22.856 "max_latency_us": 16699.543703703705 00:10:22.856 } 00:10:22.856 ], 00:10:22.856 "core_count": 1 00:10:22.856 } 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 733952 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 733952 ']' 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 733952 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.856 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 733952 00:10:23.114 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:23.114 08:46:36 
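In the bdevperf summary above, the MiB/s column is just the IOPS scaled by the fixed 4096-byte IO size. A quick check of that relationship, using the `iops` and `io_size` values from the JSON results block:

```python
# Values copied from the "results" JSON emitted by bdevperf above.
iops = 16070.482130452057
io_size = 4096  # bytes per IO (-o 4096)

mibps = iops * io_size / (1024 * 1024)
print(mibps)  # ~62.775, matching the reported "mibps" field
```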
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:23.114 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 733952' 00:10:23.114 killing process with pid 733952 00:10:23.114 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 733952 00:10:23.114 Received shutdown signal, test time was about 10.000000 seconds 00:10:23.114 00:10:23.114 Latency(us) 00:10:23.114 [2024-11-06T07:46:36.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:23.114 [2024-11-06T07:46:36.403Z] =================================================================================================================== 00:10:23.114 [2024-11-06T07:46:36.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:23.114 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 733952 00:10:23.114 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:23.372 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:23.938 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:23.938 08:46:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:23.938 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:10:23.938 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:23.938 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:24.196 [2024-11-06 08:46:37.430600] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.196 08:46:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:24.196 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:24.454 request: 00:10:24.454 { 00:10:24.454 "uuid": "61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9", 00:10:24.454 "method": "bdev_lvol_get_lvstores", 00:10:24.454 "req_id": 1 00:10:24.454 } 00:10:24.454 Got JSON-RPC error response 00:10:24.454 response: 00:10:24.454 { 00:10:24.454 "code": -19, 00:10:24.454 "message": "No such device" 00:10:24.454 } 00:10:24.454 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:24.454 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:24.454 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:24.454 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:24.454 08:46:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:24.712 aio_bdev 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1296618e-c8b6-471f-ada2-1c574d39baca 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=1296618e-c8b6-471f-ada2-1c574d39baca 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.970 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:25.229 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1296618e-c8b6-471f-ada2-1c574d39baca -t 2000 00:10:25.488 [ 00:10:25.488 { 00:10:25.488 "name": "1296618e-c8b6-471f-ada2-1c574d39baca", 00:10:25.488 "aliases": [ 00:10:25.488 "lvs/lvol" 00:10:25.488 ], 00:10:25.488 "product_name": "Logical Volume", 00:10:25.488 "block_size": 4096, 00:10:25.488 "num_blocks": 38912, 00:10:25.488 "uuid": "1296618e-c8b6-471f-ada2-1c574d39baca", 00:10:25.488 "assigned_rate_limits": { 00:10:25.488 "rw_ios_per_sec": 0, 00:10:25.488 "rw_mbytes_per_sec": 0, 00:10:25.488 "r_mbytes_per_sec": 0, 00:10:25.488 "w_mbytes_per_sec": 0 00:10:25.488 }, 00:10:25.488 "claimed": false, 00:10:25.488 "zoned": false, 00:10:25.488 "supported_io_types": { 00:10:25.488 "read": true, 00:10:25.488 "write": true, 00:10:25.488 "unmap": true, 00:10:25.488 "flush": false, 00:10:25.488 "reset": true, 00:10:25.488 
"nvme_admin": false, 00:10:25.488 "nvme_io": false, 00:10:25.488 "nvme_io_md": false, 00:10:25.488 "write_zeroes": true, 00:10:25.488 "zcopy": false, 00:10:25.488 "get_zone_info": false, 00:10:25.488 "zone_management": false, 00:10:25.488 "zone_append": false, 00:10:25.488 "compare": false, 00:10:25.488 "compare_and_write": false, 00:10:25.488 "abort": false, 00:10:25.488 "seek_hole": true, 00:10:25.488 "seek_data": true, 00:10:25.488 "copy": false, 00:10:25.488 "nvme_iov_md": false 00:10:25.488 }, 00:10:25.488 "driver_specific": { 00:10:25.488 "lvol": { 00:10:25.488 "lvol_store_uuid": "61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9", 00:10:25.488 "base_bdev": "aio_bdev", 00:10:25.488 "thin_provision": false, 00:10:25.488 "num_allocated_clusters": 38, 00:10:25.488 "snapshot": false, 00:10:25.488 "clone": false, 00:10:25.488 "esnap_clone": false 00:10:25.488 } 00:10:25.488 } 00:10:25.488 } 00:10:25.488 ] 00:10:25.488 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:25.488 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:25.488 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:25.747 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:25.747 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:25.747 08:46:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:26.006 08:46:39 
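The `free_clusters == 61` check above is the grown total of 99 data clusters minus the 38 clusters reported as `num_allocated_clusters` for the 150 MiB lvol. A sketch of that bookkeeping, assuming 4 MiB clusters with round-up allocation (150 / 4 = 37.5, rounded up to 38):

```python
import math

cluster_sz_mb = 4
lvol_size_mb = 150        # bdev_lvol_create ... lvol 150
total_data_clusters = 99  # total_data_clusters after the grow

# Thick-provisioned lvols round their size up to whole clusters.
allocated = math.ceil(lvol_size_mb / cluster_sz_mb)
free = total_data_clusters - allocated
print(allocated, free)  # 38 61
```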
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:26.006 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1296618e-c8b6-471f-ada2-1c574d39baca 00:10:26.263 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61cd2b6c-4eb0-4b2e-8938-b9d44f1f6ae9 00:10:26.524 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.783 00:10:26.783 real 0m17.746s 00:10:26.783 user 0m17.337s 00:10:26.783 sys 0m1.810s 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:26.783 ************************************ 00:10:26.783 END TEST lvs_grow_clean 00:10:26.783 ************************************ 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:26.783 ************************************ 
00:10:26.783 START TEST lvs_grow_dirty 00:10:26.783 ************************************ 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.783 08:46:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:27.041 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:27.041 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:27.300 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:27.300 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:27.300 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:27.866 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:27.866 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:27.866 08:46:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 04e7c774-9380-4583-96ea-2ffe7623ba00 lvol 150 00:10:27.866 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:27.866 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:27.866 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:28.124 [2024-11-06 08:46:41.371303] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:28.124 [2024-11-06 08:46:41.371384] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:28.124 true 00:10:28.124 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:28.124 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:28.382 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:28.382 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:28.640 08:46:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:29.206 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:29.206 [2024-11-06 08:46:42.446461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.206 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=736139 00:10:29.464 08:46:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 736139 /var/tmp/bdevperf.sock 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 736139 ']' 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:29.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:29.464 08:46:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:29.722 [2024-11-06 08:46:42.776392] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:10:29.722 [2024-11-06 08:46:42.776467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736139 ] 00:10:29.722 [2024-11-06 08:46:42.841094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.722 [2024-11-06 08:46:42.900638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.979 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.979 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:29.980 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:30.237 Nvme0n1 00:10:30.237 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:30.495 [ 00:10:30.495 { 00:10:30.495 "name": "Nvme0n1", 00:10:30.495 "aliases": [ 00:10:30.495 "13f3e78e-d4b7-49e5-94d9-0824fe73b397" 00:10:30.495 ], 00:10:30.495 "product_name": "NVMe disk", 00:10:30.495 "block_size": 4096, 00:10:30.495 "num_blocks": 38912, 00:10:30.495 "uuid": "13f3e78e-d4b7-49e5-94d9-0824fe73b397", 00:10:30.495 "numa_id": 0, 00:10:30.495 "assigned_rate_limits": { 00:10:30.495 "rw_ios_per_sec": 0, 00:10:30.495 "rw_mbytes_per_sec": 0, 00:10:30.495 "r_mbytes_per_sec": 0, 00:10:30.495 "w_mbytes_per_sec": 0 00:10:30.495 }, 00:10:30.495 "claimed": false, 00:10:30.495 "zoned": false, 00:10:30.495 "supported_io_types": { 00:10:30.495 "read": true, 
00:10:30.495 "write": true, 00:10:30.495 "unmap": true, 00:10:30.495 "flush": true, 00:10:30.495 "reset": true, 00:10:30.495 "nvme_admin": true, 00:10:30.495 "nvme_io": true, 00:10:30.495 "nvme_io_md": false, 00:10:30.495 "write_zeroes": true, 00:10:30.495 "zcopy": false, 00:10:30.495 "get_zone_info": false, 00:10:30.495 "zone_management": false, 00:10:30.495 "zone_append": false, 00:10:30.495 "compare": true, 00:10:30.495 "compare_and_write": true, 00:10:30.495 "abort": true, 00:10:30.495 "seek_hole": false, 00:10:30.495 "seek_data": false, 00:10:30.495 "copy": true, 00:10:30.495 "nvme_iov_md": false 00:10:30.495 }, 00:10:30.495 "memory_domains": [ 00:10:30.495 { 00:10:30.495 "dma_device_id": "system", 00:10:30.495 "dma_device_type": 1 00:10:30.495 } 00:10:30.495 ], 00:10:30.495 "driver_specific": { 00:10:30.495 "nvme": [ 00:10:30.495 { 00:10:30.495 "trid": { 00:10:30.495 "trtype": "TCP", 00:10:30.495 "adrfam": "IPv4", 00:10:30.495 "traddr": "10.0.0.2", 00:10:30.495 "trsvcid": "4420", 00:10:30.496 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:30.496 }, 00:10:30.496 "ctrlr_data": { 00:10:30.496 "cntlid": 1, 00:10:30.496 "vendor_id": "0x8086", 00:10:30.496 "model_number": "SPDK bdev Controller", 00:10:30.496 "serial_number": "SPDK0", 00:10:30.496 "firmware_revision": "25.01", 00:10:30.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:30.496 "oacs": { 00:10:30.496 "security": 0, 00:10:30.496 "format": 0, 00:10:30.496 "firmware": 0, 00:10:30.496 "ns_manage": 0 00:10:30.496 }, 00:10:30.496 "multi_ctrlr": true, 00:10:30.496 "ana_reporting": false 00:10:30.496 }, 00:10:30.496 "vs": { 00:10:30.496 "nvme_version": "1.3" 00:10:30.496 }, 00:10:30.496 "ns_data": { 00:10:30.496 "id": 1, 00:10:30.496 "can_share": true 00:10:30.496 } 00:10:30.496 } 00:10:30.496 ], 00:10:30.496 "mp_policy": "active_passive" 00:10:30.496 } 00:10:30.496 } 00:10:30.496 ] 00:10:30.496 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=736275 
00:10:30.496 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:30.496 08:46:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:30.496 Running I/O for 10 seconds... 00:10:31.869 Latency(us) 00:10:31.869 [2024-11-06T07:46:45.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.869 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:10:31.869 [2024-11-06T07:46:45.158Z] =================================================================================================================== 00:10:31.869 [2024-11-06T07:46:45.158Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:10:31.869 00:10:32.435 08:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:32.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.693 Nvme0n1 : 2.00 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:10:32.693 [2024-11-06T07:46:45.982Z] =================================================================================================================== 00:10:32.693 [2024-11-06T07:46:45.982Z] Total : 15273.00 59.66 0.00 0.00 0.00 0.00 0.00 00:10:32.693 00:10:32.693 true 00:10:32.693 08:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:32.693 08:46:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:10:32.951 08:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:32.951 08:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:32.951 08:46:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 736275 00:10:33.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.517 Nvme0n1 : 3.00 15389.00 60.11 0.00 0.00 0.00 0.00 0.00 00:10:33.517 [2024-11-06T07:46:46.806Z] =================================================================================================================== 00:10:33.517 [2024-11-06T07:46:46.806Z] Total : 15389.00 60.11 0.00 0.00 0.00 0.00 0.00 00:10:33.517 00:10:34.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.890 Nvme0n1 : 4.00 15479.50 60.47 0.00 0.00 0.00 0.00 0.00 00:10:34.890 [2024-11-06T07:46:48.179Z] =================================================================================================================== 00:10:34.890 [2024-11-06T07:46:48.179Z] Total : 15479.50 60.47 0.00 0.00 0.00 0.00 0.00 00:10:34.890 00:10:35.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.457 Nvme0n1 : 5.00 15558.60 60.78 0.00 0.00 0.00 0.00 0.00 00:10:35.457 [2024-11-06T07:46:48.746Z] =================================================================================================================== 00:10:35.457 [2024-11-06T07:46:48.746Z] Total : 15558.60 60.78 0.00 0.00 0.00 0.00 0.00 00:10:35.457 00:10:36.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.831 Nvme0n1 : 6.00 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:10:36.831 [2024-11-06T07:46:50.120Z] =================================================================================================================== 00:10:36.831 
[2024-11-06T07:46:50.120Z] Total : 15622.00 61.02 0.00 0.00 0.00 0.00 0.00 00:10:36.831 00:10:37.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.765 Nvme0n1 : 7.00 15649.00 61.13 0.00 0.00 0.00 0.00 0.00 00:10:37.765 [2024-11-06T07:46:51.054Z] =================================================================================================================== 00:10:37.765 [2024-11-06T07:46:51.054Z] Total : 15649.00 61.13 0.00 0.00 0.00 0.00 0.00 00:10:37.765 00:10:38.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.697 Nvme0n1 : 8.00 15693.12 61.30 0.00 0.00 0.00 0.00 0.00 00:10:38.697 [2024-11-06T07:46:51.986Z] =================================================================================================================== 00:10:38.697 [2024-11-06T07:46:51.987Z] Total : 15693.12 61.30 0.00 0.00 0.00 0.00 0.00 00:10:38.698 00:10:39.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.631 Nvme0n1 : 9.00 15727.44 61.44 0.00 0.00 0.00 0.00 0.00 00:10:39.631 [2024-11-06T07:46:52.920Z] =================================================================================================================== 00:10:39.631 [2024-11-06T07:46:52.920Z] Total : 15727.44 61.44 0.00 0.00 0.00 0.00 0.00 00:10:39.631 00:10:40.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:40.567 Nvme0n1 : 10.00 15761.30 61.57 0.00 0.00 0.00 0.00 0.00 00:10:40.567 [2024-11-06T07:46:53.856Z] =================================================================================================================== 00:10:40.567 [2024-11-06T07:46:53.856Z] Total : 15761.30 61.57 0.00 0.00 0.00 0.00 0.00 00:10:40.567 00:10:40.567 00:10:40.567 Latency(us) 00:10:40.567 [2024-11-06T07:46:53.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:40.567 Nvme0n1 : 10.01 15767.27 61.59 0.00 0.00 8113.33 2912.71 15728.64 00:10:40.567 [2024-11-06T07:46:53.856Z] =================================================================================================================== 00:10:40.567 [2024-11-06T07:46:53.856Z] Total : 15767.27 61.59 0.00 0.00 8113.33 2912.71 15728.64 00:10:40.567 { 00:10:40.567 "results": [ 00:10:40.567 { 00:10:40.567 "job": "Nvme0n1", 00:10:40.567 "core_mask": "0x2", 00:10:40.567 "workload": "randwrite", 00:10:40.567 "status": "finished", 00:10:40.567 "queue_depth": 128, 00:10:40.567 "io_size": 4096, 00:10:40.567 "runtime": 10.008326, 00:10:40.567 "iops": 15767.27216919193, 00:10:40.567 "mibps": 61.59090691090598, 00:10:40.567 "io_failed": 0, 00:10:40.567 "io_timeout": 0, 00:10:40.567 "avg_latency_us": 8113.331645125647, 00:10:40.567 "min_latency_us": 2912.711111111111, 00:10:40.567 "max_latency_us": 15728.64 00:10:40.567 } 00:10:40.567 ], 00:10:40.567 "core_count": 1 00:10:40.567 } 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 736139 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 736139 ']' 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 736139 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 736139 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 736139' 00:10:40.567 killing process with pid 736139 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 736139 00:10:40.567 Received shutdown signal, test time was about 10.000000 seconds 00:10:40.567 00:10:40.567 Latency(us) 00:10:40.567 [2024-11-06T07:46:53.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:40.567 [2024-11-06T07:46:53.856Z] =================================================================================================================== 00:10:40.567 [2024-11-06T07:46:53.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:40.567 08:46:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 736139 00:10:40.825 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.083 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:41.341 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:41.341 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:41.599 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:41.599 08:46:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:41.600 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 733512 00:10:41.600 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 733512 00:10:41.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 733512 Killed "${NVMF_APP[@]}" "$@" 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=737615 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 737615 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 737615 ']' 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.859 08:46:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:41.859 [2024-11-06 08:46:54.953877] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:10:41.859 [2024-11-06 08:46:54.953951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.859 [2024-11-06 08:46:55.023220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.859 [2024-11-06 08:46:55.075418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.859 [2024-11-06 08:46:55.075474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.859 [2024-11-06 08:46:55.075496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.859 [2024-11-06 08:46:55.075507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.859 [2024-11-06 08:46:55.075517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.859 [2024-11-06 08:46:55.076059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.117 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:42.375 [2024-11-06 08:46:55.463542] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:42.375 [2024-11-06 08:46:55.463670] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:42.375 [2024-11-06 08:46:55.463718] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=13f3e78e-d4b7-49e5-94d9-0824fe73b397 
00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:42.375 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:42.633 08:46:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13f3e78e-d4b7-49e5-94d9-0824fe73b397 -t 2000 00:10:42.891 [ 00:10:42.891 { 00:10:42.891 "name": "13f3e78e-d4b7-49e5-94d9-0824fe73b397", 00:10:42.891 "aliases": [ 00:10:42.891 "lvs/lvol" 00:10:42.891 ], 00:10:42.891 "product_name": "Logical Volume", 00:10:42.891 "block_size": 4096, 00:10:42.891 "num_blocks": 38912, 00:10:42.891 "uuid": "13f3e78e-d4b7-49e5-94d9-0824fe73b397", 00:10:42.891 "assigned_rate_limits": { 00:10:42.891 "rw_ios_per_sec": 0, 00:10:42.891 "rw_mbytes_per_sec": 0, 00:10:42.891 "r_mbytes_per_sec": 0, 00:10:42.891 "w_mbytes_per_sec": 0 00:10:42.891 }, 00:10:42.891 "claimed": false, 00:10:42.891 "zoned": false, 00:10:42.891 "supported_io_types": { 00:10:42.891 "read": true, 00:10:42.891 "write": true, 00:10:42.891 "unmap": true, 00:10:42.891 "flush": false, 00:10:42.891 "reset": true, 00:10:42.891 "nvme_admin": false, 00:10:42.891 "nvme_io": false, 00:10:42.891 "nvme_io_md": false, 00:10:42.891 "write_zeroes": true, 00:10:42.891 "zcopy": false, 00:10:42.891 "get_zone_info": false, 00:10:42.891 "zone_management": false, 00:10:42.891 "zone_append": 
false, 00:10:42.891 "compare": false, 00:10:42.891 "compare_and_write": false, 00:10:42.891 "abort": false, 00:10:42.891 "seek_hole": true, 00:10:42.891 "seek_data": true, 00:10:42.891 "copy": false, 00:10:42.891 "nvme_iov_md": false 00:10:42.891 }, 00:10:42.891 "driver_specific": { 00:10:42.891 "lvol": { 00:10:42.891 "lvol_store_uuid": "04e7c774-9380-4583-96ea-2ffe7623ba00", 00:10:42.891 "base_bdev": "aio_bdev", 00:10:42.891 "thin_provision": false, 00:10:42.891 "num_allocated_clusters": 38, 00:10:42.891 "snapshot": false, 00:10:42.891 "clone": false, 00:10:42.891 "esnap_clone": false 00:10:42.891 } 00:10:42.891 } 00:10:42.891 } 00:10:42.891 ] 00:10:42.891 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:42.891 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:42.891 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:43.150 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:43.150 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:43.150 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:43.409 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:43.409 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:43.670 [2024-11-06 08:46:56.817281] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.670 08:46:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:43.670 08:46:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:43.928 request: 00:10:43.928 { 00:10:43.928 "uuid": "04e7c774-9380-4583-96ea-2ffe7623ba00", 00:10:43.928 "method": "bdev_lvol_get_lvstores", 00:10:43.928 "req_id": 1 00:10:43.928 } 00:10:43.928 Got JSON-RPC error response 00:10:43.928 response: 00:10:43.928 { 00:10:43.928 "code": -19, 00:10:43.928 "message": "No such device" 00:10:43.928 } 00:10:43.928 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:10:43.928 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:43.928 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:43.928 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:43.928 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:44.186 aio_bdev 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:44.186 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:44.444 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 13f3e78e-d4b7-49e5-94d9-0824fe73b397 -t 2000 00:10:44.702 [ 00:10:44.702 { 00:10:44.702 "name": "13f3e78e-d4b7-49e5-94d9-0824fe73b397", 00:10:44.702 "aliases": [ 00:10:44.702 "lvs/lvol" 00:10:44.702 ], 00:10:44.702 "product_name": "Logical Volume", 00:10:44.702 "block_size": 4096, 00:10:44.702 "num_blocks": 38912, 00:10:44.702 "uuid": "13f3e78e-d4b7-49e5-94d9-0824fe73b397", 00:10:44.702 "assigned_rate_limits": { 00:10:44.702 "rw_ios_per_sec": 0, 00:10:44.702 "rw_mbytes_per_sec": 0, 00:10:44.702 "r_mbytes_per_sec": 0, 00:10:44.702 "w_mbytes_per_sec": 0 00:10:44.702 }, 00:10:44.702 "claimed": false, 00:10:44.702 "zoned": false, 00:10:44.702 "supported_io_types": { 00:10:44.702 "read": true, 00:10:44.702 "write": true, 00:10:44.702 "unmap": true, 00:10:44.703 "flush": false, 00:10:44.703 "reset": true, 00:10:44.703 "nvme_admin": false, 00:10:44.703 "nvme_io": false, 00:10:44.703 "nvme_io_md": false, 00:10:44.703 "write_zeroes": true, 00:10:44.703 "zcopy": false, 00:10:44.703 "get_zone_info": false, 00:10:44.703 "zone_management": false, 00:10:44.703 "zone_append": false, 00:10:44.703 "compare": false, 00:10:44.703 "compare_and_write": false, 
00:10:44.703 "abort": false, 00:10:44.703 "seek_hole": true, 00:10:44.703 "seek_data": true, 00:10:44.703 "copy": false, 00:10:44.703 "nvme_iov_md": false 00:10:44.703 }, 00:10:44.703 "driver_specific": { 00:10:44.703 "lvol": { 00:10:44.703 "lvol_store_uuid": "04e7c774-9380-4583-96ea-2ffe7623ba00", 00:10:44.703 "base_bdev": "aio_bdev", 00:10:44.703 "thin_provision": false, 00:10:44.703 "num_allocated_clusters": 38, 00:10:44.703 "snapshot": false, 00:10:44.703 "clone": false, 00:10:44.703 "esnap_clone": false 00:10:44.703 } 00:10:44.703 } 00:10:44.703 } 00:10:44.703 ] 00:10:44.703 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:10:44.703 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:44.703 08:46:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:44.961 08:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:44.961 08:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:44.961 08:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:45.218 08:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:45.218 08:46:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 13f3e78e-d4b7-49e5-94d9-0824fe73b397 00:10:45.476 08:46:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 04e7c774-9380-4583-96ea-2ffe7623ba00 00:10:46.043 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:46.043 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:46.043 00:10:46.043 real 0m19.338s 00:10:46.043 user 0m49.180s 00:10:46.043 sys 0m4.527s 00:10:46.043 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.043 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:46.043 ************************************ 00:10:46.043 END TEST lvs_grow_dirty 00:10:46.043 ************************************ 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:46.301 nvmf_trace.0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.301 rmmod nvme_tcp 00:10:46.301 rmmod nvme_fabrics 00:10:46.301 rmmod nvme_keyring 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 737615 ']' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 737615 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 737615 ']' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 737615 
00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.301 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 737615 00:10:46.302 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.302 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.302 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 737615' 00:10:46.302 killing process with pid 737615 00:10:46.302 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 737615 00:10:46.302 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 737615 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:10:46.559 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.560 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.560 08:46:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.570 00:10:48.570 real 0m42.672s 00:10:48.570 user 1m12.598s 00:10:48.570 sys 0m8.322s 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 ************************************ 00:10:48.570 END TEST nvmf_lvs_grow 00:10:48.570 ************************************ 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.570 ************************************ 00:10:48.570 START TEST nvmf_bdev_io_wait 00:10:48.570 ************************************ 00:10:48.570 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:48.890 * Looking for test storage... 
00:10:48.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:48.890 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.890 --rc genhtml_branch_coverage=1 00:10:48.890 --rc genhtml_function_coverage=1 00:10:48.890 --rc genhtml_legend=1 00:10:48.890 --rc geninfo_all_blocks=1 00:10:48.890 --rc geninfo_unexecuted_blocks=1 00:10:48.890 00:10:48.890 ' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:48.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.890 --rc genhtml_branch_coverage=1 00:10:48.890 --rc genhtml_function_coverage=1 00:10:48.890 --rc genhtml_legend=1 00:10:48.890 --rc geninfo_all_blocks=1 00:10:48.890 --rc geninfo_unexecuted_blocks=1 00:10:48.890 00:10:48.890 ' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:48.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.890 --rc genhtml_branch_coverage=1 00:10:48.890 --rc genhtml_function_coverage=1 00:10:48.890 --rc genhtml_legend=1 00:10:48.890 --rc geninfo_all_blocks=1 00:10:48.890 --rc geninfo_unexecuted_blocks=1 00:10:48.890 00:10:48.890 ' 00:10:48.890 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:48.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.890 --rc genhtml_branch_coverage=1 00:10:48.890 --rc genhtml_function_coverage=1 00:10:48.890 --rc genhtml_legend=1 00:10:48.891 --rc geninfo_all_blocks=1 00:10:48.891 --rc geninfo_unexecuted_blocks=1 00:10:48.891 00:10:48.891 ' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.891 08:47:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.891 08:47:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.422 08:47:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.422 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:51.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:51.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.423 08:47:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:51.423 Found net devices under 0000:09:00.0: cvl_0_0 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.423 
08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:51.423 Found net devices under 0000:09:00.1: cvl_0_1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.423 08:47:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:51.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:51.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms
00:10:51.423
00:10:51.423 --- 10.0.0.2 ping statistics ---
00:10:51.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:51.423 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:51.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:51.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms
00:10:51.423
00:10:51.423 --- 10.0.0.1 ping statistics ---
00:10:51.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:51.423 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=740158
00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait --
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 740158 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 740158 ']' 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.423 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.423 [2024-11-06 08:47:04.392006] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:10:51.423 [2024-11-06 08:47:04.392087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.423 [2024-11-06 08:47:04.464425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.423 [2024-11-06 08:47:04.521775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.423 [2024-11-06 08:47:04.521862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:51.423 [2024-11-06 08:47:04.521877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.423 [2024-11-06 08:47:04.521902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.424 [2024-11-06 08:47:04.521912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.424 [2024-11-06 08:47:04.523520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.424 [2024-11-06 08:47:04.523626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.424 [2024-11-06 08:47:04.523724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.424 [2024-11-06 08:47:04.523720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.424 08:47:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.424 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 [2024-11-06 08:47:04.718533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 Malloc0 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.682 
08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:51.682 [2024-11-06 08:47:04.771431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=740190 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=740191 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
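Each bdevperf instance launched above reads a target description produced by `gen_nvmf_target_json` over `/dev/fd/63`; the generating heredoc and its expansion are both visible (flattened) further down in the trace. A self-contained approximation with this run's transport, address, and port hard-coded — the function name and its defaults here are illustrative, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Approximation of the per-subsystem JSON that gen_nvmf_target_json emits,
# based on the heredoc visible in the trace. The transport/address/port
# values are the ones used in this run; hdgst/ddgst default to false when
# the corresponding environment variables are unset.
gen_target_json() {
    local subsystem=${1:-1}
    local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1
```

With this config each bdevperf attaches to `nqn.2016-06.io.spdk:cnode1` at 10.0.0.2:4420 — the listener added by `nvmf_subsystem_add_listener` above — then runs its assigned workload (write, read, flush, or unmap) for one second.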
00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=740193 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:51.682 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:51.682 { 00:10:51.682 "params": { 00:10:51.682 "name": "Nvme$subsystem", 00:10:51.682 "trtype": "$TEST_TRANSPORT", 00:10:51.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.682 "adrfam": "ipv4", 00:10:51.682 "trsvcid": "$NVMF_PORT", 00:10:51.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.683 "hdgst": ${hdgst:-false}, 00:10:51.683 "ddgst": ${ddgst:-false} 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 } 00:10:51.683 EOF 00:10:51.683 )") 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=740196 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:51.683 08:47:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:51.683 { 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme$subsystem", 00:10:51.683 "trtype": "$TEST_TRANSPORT", 00:10:51.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "$NVMF_PORT", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.683 "hdgst": ${hdgst:-false}, 00:10:51.683 "ddgst": ${ddgst:-false} 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 } 00:10:51.683 EOF 00:10:51.683 )") 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:51.683 { 00:10:51.683 "params": { 00:10:51.683 "name": 
"Nvme$subsystem", 00:10:51.683 "trtype": "$TEST_TRANSPORT", 00:10:51.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "$NVMF_PORT", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.683 "hdgst": ${hdgst:-false}, 00:10:51.683 "ddgst": ${ddgst:-false} 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 } 00:10:51.683 EOF 00:10:51.683 )") 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:51.683 { 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme$subsystem", 00:10:51.683 "trtype": "$TEST_TRANSPORT", 00:10:51.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "$NVMF_PORT", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.683 "hdgst": ${hdgst:-false}, 00:10:51.683 "ddgst": ${ddgst:-false} 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 } 00:10:51.683 EOF 00:10:51.683 )") 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 740190 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 
-- # cat 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme1", 00:10:51.683 "trtype": "tcp", 00:10:51.683 "traddr": "10.0.0.2", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "4420", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.683 "hdgst": false, 00:10:51.683 "ddgst": false 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 }' 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme1", 00:10:51.683 "trtype": "tcp", 00:10:51.683 "traddr": "10.0.0.2", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "4420", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.683 "hdgst": false, 00:10:51.683 "ddgst": false 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 }' 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme1", 00:10:51.683 "trtype": "tcp", 00:10:51.683 "traddr": 
"10.0.0.2", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "4420", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.683 "hdgst": false, 00:10:51.683 "ddgst": false 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 }' 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:10:51.683 08:47:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:51.683 "params": { 00:10:51.683 "name": "Nvme1", 00:10:51.683 "trtype": "tcp", 00:10:51.683 "traddr": "10.0.0.2", 00:10:51.683 "adrfam": "ipv4", 00:10:51.683 "trsvcid": "4420", 00:10:51.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.683 "hdgst": false, 00:10:51.683 "ddgst": false 00:10:51.683 }, 00:10:51.683 "method": "bdev_nvme_attach_controller" 00:10:51.683 }' 00:10:51.683 [2024-11-06 08:47:04.823452] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:10:51.683 [2024-11-06 08:47:04.823452] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:10:51.683 [2024-11-06 08:47:04.823452] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:10:51.683 [2024-11-06 08:47:04.823539] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:10:51.683 [2024-11-06 08:47:04.823539] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:10:51.683 [2024-11-06 08:47:04.823540] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:10:51.683 [2024-11-06 08:47:04.823678] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:10:51.683 [2024-11-06 08:47:04.823747] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:51.941 [2024-11-06 08:47:05.017333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:51.941 [2024-11-06 08:47:05.069616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:51.941 [2024-11-06 08:47:05.120165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:51.941 [2024-11-06 08:47:05.176237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:51.941 [2024-11-06 08:47:05.223900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:52.198 [2024-11-06 08:47:05.281647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:52.199 [2024-11-06 08:47:05.301024] app.c: 919:spdk_app_start: *NOTICE*: Total cores
available: 1
00:10:52.199 [2024-11-06 08:47:05.351323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:10:52.199 Running I/O for 1 seconds...
00:10:52.456 Running I/O for 1 seconds...
00:10:52.456 Running I/O for 1 seconds...
00:10:52.456 Running I/O for 1 seconds...
00:10:53.391 11989.00 IOPS, 46.83 MiB/s
00:10:53.391 Latency(us)
00:10:53.391 [2024-11-06T07:47:06.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:53.391 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:10:53.391 Nvme1n1 : 1.01 12044.04 47.05 0.00 0.00 10589.21 5121.52 18155.90
00:10:53.391 [2024-11-06T07:47:06.680Z] ===================================================================================================================
00:10:53.391 [2024-11-06T07:47:06.680Z] Total : 12044.04 47.05 0.00 0.00 10589.21 5121.52 18155.90
00:10:53.391 4493.00 IOPS, 17.55 MiB/s
00:10:53.391 Latency(us)
00:10:53.391 [2024-11-06T07:47:06.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:53.391 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:10:53.391 Nvme1n1 : 1.03 4508.59 17.61 0.00 0.00 28071.77 7524.50 37671.06
00:10:53.391 [2024-11-06T07:47:06.680Z] ===================================================================================================================
00:10:53.391 [2024-11-06T07:47:06.680Z] Total : 4508.59 17.61 0.00 0.00 28071.77 7524.50 37671.06
00:10:53.391 195664.00 IOPS, 764.31 MiB/s
00:10:53.391 Latency(us)
00:10:53.391 [2024-11-06T07:47:06.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:53.391 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:10:53.391 Nvme1n1 : 1.00 195288.26 762.84 0.00 0.00 651.89 317.06 1893.26
00:10:53.391 [2024-11-06T07:47:06.680Z] ===================================================================================================================
00:10:53.391 [2024-11-06T07:47:06.680Z] Total : 195288.26 762.84 0.00 0.00 651.89 317.06 1893.26
00:10:53.391 4443.00 IOPS, 17.36 MiB/s
00:10:53.391 Latency(us)
00:10:53.391 [2024-11-06T07:47:06.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:53.391 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:10:53.391 Nvme1n1 : 1.01 4537.54 17.72 0.00 0.00 28086.57 6699.24 53982.25
00:10:53.391 [2024-11-06T07:47:06.680Z] ===================================================================================================================
00:10:53.391 [2024-11-06T07:47:06.680Z] Total : 4537.54 17.72 0.00 0.00 28086.57 6699.24 53982.25
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 740191
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 740193
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 740196
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup
00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:10:53.650 08:47:06
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:53.650 rmmod nvme_tcp 00:10:53.650 rmmod nvme_fabrics 00:10:53.650 rmmod nvme_keyring 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 740158 ']' 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 740158 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 740158 ']' 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 740158 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 740158 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 740158' 00:10:53.650 killing process with pid 740158 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 740158 00:10:53.650 08:47:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 740158 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.910 08:47:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.447 00:10:56.447 real 0m7.354s 00:10:56.447 user 0m16.303s 00:10:56.447 sys 0m3.570s 00:10:56.447 08:47:09 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:56.447 ************************************ 00:10:56.447 END TEST nvmf_bdev_io_wait 00:10:56.447 ************************************ 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:56.447 ************************************ 00:10:56.447 START TEST nvmf_queue_depth 00:10:56.447 ************************************ 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:56.447 * Looking for test storage... 
00:10:56.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:56.447 
08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.447 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:10:56.448 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:56.448 --rc genhtml_branch_coverage=1 00:10:56.448 --rc genhtml_function_coverage=1 00:10:56.448 --rc genhtml_legend=1 00:10:56.448 --rc geninfo_all_blocks=1 00:10:56.448 --rc geninfo_unexecuted_blocks=1 00:10:56.448 00:10:56.448 ' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:10:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.448 --rc genhtml_branch_coverage=1 00:10:56.448 --rc genhtml_function_coverage=1 00:10:56.448 --rc genhtml_legend=1 00:10:56.448 --rc geninfo_all_blocks=1 00:10:56.448 --rc geninfo_unexecuted_blocks=1 00:10:56.448 00:10:56.448 ' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:10:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.448 --rc genhtml_branch_coverage=1 00:10:56.448 --rc genhtml_function_coverage=1 00:10:56.448 --rc genhtml_legend=1 00:10:56.448 --rc geninfo_all_blocks=1 00:10:56.448 --rc geninfo_unexecuted_blocks=1 00:10:56.448 00:10:56.448 ' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:10:56.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.448 --rc genhtml_branch_coverage=1 00:10:56.448 --rc genhtml_function_coverage=1 00:10:56.448 --rc genhtml_legend=1 00:10:56.448 --rc geninfo_all_blocks=1 00:10:56.448 --rc geninfo_unexecuted_blocks=1 00:10:56.448 00:10:56.448 ' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.448 08:47:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.448 08:47:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.448 08:47:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.448 08:47:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:58.350 08:47:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:58.350 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.350 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:58.351 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:58.351 Found net devices under 0000:09:00.0: cvl_0_0 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:58.351 Found net devices under 0000:09:00.1: cvl_0_1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:58.351 
08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:58.351 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:58.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:10:58.610 00:10:58.610 --- 10.0.0.2 ping statistics --- 00:10:58.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.610 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:58.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
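The trace above (nvmf/common.sh@250–291) builds the two-interface test topology: one port of the NIC is moved into a network namespace and becomes the target side, the other stays in the root namespace as the initiator, and a single iptables rule opens TCP/4420. A minimal sketch of that sequence, assuming the interface names (`cvl_0_0`/`cvl_0_1`) and 10.0.0.0/24 addressing seen in this log, shown in dry-run form (commands are echoed, not executed) since the real thing needs root and the physical NIC:

```shell
#!/usr/bin/env bash
set -euo pipefail

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets the initiator IP
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() { echo "+ $*"; }   # dry-run wrapper; replace body with "$@" to apply for real

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

The two pings at the end mirror the connectivity check in the log: one from the root namespace to the target IP, one from inside the namespace back to the initiator.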
00:10:58.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:10:58.610 00:10:58.610 --- 10.0.0.1 ping statistics --- 00:10:58.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.610 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=742434 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 742434 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 742434 ']' 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.610 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.610 [2024-11-06 08:47:11.760418] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:10:58.610 [2024-11-06 08:47:11.760507] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.610 [2024-11-06 08:47:11.837580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.610 [2024-11-06 08:47:11.889680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.610 [2024-11-06 08:47:11.889736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.610 [2024-11-06 08:47:11.889764] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.610 [2024-11-06 08:47:11.889775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.610 [2024-11-06 08:47:11.889785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.610 [2024-11-06 08:47:11.890382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.868 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:58.868 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:58.868 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:58.868 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:58.868 08:47:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.868 [2024-11-06 08:47:12.027365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:58.868 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.869 Malloc0 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.869 [2024-11-06 08:47:12.075431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.869 08:47:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=742569 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 742569 /var/tmp/bdevperf.sock 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 742569 ']' 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:58.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:58.869 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.869 [2024-11-06 08:47:12.120898] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:10:58.869 [2024-11-06 08:47:12.120975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742569 ] 00:10:59.127 [2024-11-06 08:47:12.186193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.127 [2024-11-06 08:47:12.242767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.127 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.127 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:59.127 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:59.127 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.127 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:59.385 NVMe0n1 00:10:59.385 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.385 08:47:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:59.385 Running I/O for 10 seconds... 
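Before the I/O samples start, the trace has configured the target over JSON-RPC and launched bdevperf against it. The RPC calls and the bdevperf flags below are copied from the log; the `SPDK` checkout path is an assumption (the log uses a Jenkins workspace path), so this is a sketch of the sequence rather than a runnable reproduction:

```shell
#!/usr/bin/env bash
set -euo pipefail

SPDK=${SPDK:-/path/to/spdk}   # assumption: your local SPDK checkout
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

rpc_plan() {                  # target-side RPC calls, as issued in the log
  echo "$RPC nvmf_create_transport -t tcp -o -u 8192"
  echo "$RPC bdev_malloc_create 64 512 -b Malloc0"
  echo "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  echo "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
  echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}

perf_plan() {                 # initiator side: queue depth 1024, 4 KiB verify I/O, 10 s
  echo "$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock" \
       "-q 1024 -o 4096 -w verify -t 10"
}

rpc_plan
perf_plan
```

Queue depth 1024 is the point of this test: with 4 KiB I/O it drives roughly 8.2k–8.8k IOPS in the samples that follow, and the high average latency (~115 ms) is the expected consequence of that deep queue, not a failure.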
00:11:01.692 8192.00 IOPS, 32.00 MiB/s [2024-11-06T07:47:15.548Z] 8660.00 IOPS, 33.83 MiB/s [2024-11-06T07:47:16.921Z] 8612.00 IOPS, 33.64 MiB/s [2024-11-06T07:47:17.855Z] 8701.75 IOPS, 33.99 MiB/s [2024-11-06T07:47:18.789Z] 8689.00 IOPS, 33.94 MiB/s [2024-11-06T07:47:19.724Z] 8702.83 IOPS, 34.00 MiB/s [2024-11-06T07:47:20.657Z] 8763.86 IOPS, 34.23 MiB/s [2024-11-06T07:47:21.593Z] 8786.88 IOPS, 34.32 MiB/s [2024-11-06T07:47:22.967Z] 8772.78 IOPS, 34.27 MiB/s [2024-11-06T07:47:22.967Z] 8799.40 IOPS, 34.37 MiB/s 00:11:09.678 Latency(us) 00:11:09.678 [2024-11-06T07:47:22.967Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:09.678 Verification LBA range: start 0x0 length 0x4000 00:11:09.678 NVMe0n1 : 10.08 8828.91 34.49 0.00 0.00 115534.68 20874.43 69128.34 00:11:09.678 [2024-11-06T07:47:22.967Z] =================================================================================================================== 00:11:09.678 [2024-11-06T07:47:22.967Z] Total : 8828.91 34.49 0.00 0.00 115534.68 20874.43 69128.34 00:11:09.678 { 00:11:09.678 "results": [ 00:11:09.678 { 00:11:09.678 "job": "NVMe0n1", 00:11:09.679 "core_mask": "0x1", 00:11:09.679 "workload": "verify", 00:11:09.679 "status": "finished", 00:11:09.679 "verify_range": { 00:11:09.679 "start": 0, 00:11:09.679 "length": 16384 00:11:09.679 }, 00:11:09.679 "queue_depth": 1024, 00:11:09.679 "io_size": 4096, 00:11:09.679 "runtime": 10.082557, 00:11:09.679 "iops": 8828.911158151648, 00:11:09.679 "mibps": 34.487934211529875, 00:11:09.679 "io_failed": 0, 00:11:09.679 "io_timeout": 0, 00:11:09.679 "avg_latency_us": 115534.68151669699, 00:11:09.679 "min_latency_us": 20874.42962962963, 00:11:09.679 "max_latency_us": 69128.34370370371 00:11:09.679 } 00:11:09.679 ], 00:11:09.679 "core_count": 1 00:11:09.679 } 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 742569 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 742569 ']' 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 742569 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742569 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742569' 00:11:09.679 killing process with pid 742569 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 742569 00:11:09.679 Received shutdown signal, test time was about 10.000000 seconds 00:11:09.679 00:11:09.679 Latency(us) 00:11:09.679 [2024-11-06T07:47:22.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.679 [2024-11-06T07:47:22.968Z] =================================================================================================================== 00:11:09.679 [2024-11-06T07:47:22.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 742569 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
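The `nvmftestfini` trap that fires here unwinds everything the setup created: both processes are killed, the NVMe kernel modules are unloaded, the SPDK-tagged iptables rule is stripped, and the namespace is removed. A dry-run sketch of that teardown, assuming the PIDs and interface names from this particular run (commands echoed, not executed, since they need root):

```shell
#!/usr/bin/env bash
set -euo pipefail

NS=cvl_0_0_ns_spdk
BDEVPERF_PID=742569   # pid values from this run, shown for illustration
TGT_PID=742434

run() { echo "+ $*"; }

run kill "$BDEVPERF_PID"
run kill "$TGT_PID"
run modprobe -r nvme-tcp                 # also pulls rmmod of nvme_fabrics/nvme_keyring
run modprobe -r nvme-fabrics
run sh -c "iptables-save | grep -v SPDK_NVMF | iptables-restore"
run ip netns delete "$NS"
run ip -4 addr flush cvl_0_1
```

Filtering `iptables-save` on the `SPDK_NVMF` comment is why the setup tagged its ACCEPT rule with `-m comment` earlier: it lets the teardown remove exactly the rules this test added and nothing else.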
00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.679 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.679 rmmod nvme_tcp 00:11:09.679 rmmod nvme_fabrics 00:11:09.937 rmmod nvme_keyring 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 742434 ']' 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 742434 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 742434 ']' 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 742434 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.937 08:47:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742434 00:11:09.937 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:11:09.937 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:09.937 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742434' 00:11:09.937 killing process with pid 742434 00:11:09.937 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 742434 00:11:09.937 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 742434 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.197 08:47:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.107 08:47:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.107 00:11:12.107 real 0m16.108s 00:11:12.107 user 0m22.461s 00:11:12.107 sys 0m3.151s 00:11:12.107 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.107 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:12.107 ************************************ 00:11:12.107 END TEST nvmf_queue_depth 00:11:12.107 ************************************ 00:11:12.107 08:47:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:12.107 08:47:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.108 08:47:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.108 08:47:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.108 ************************************ 00:11:12.108 START TEST nvmf_target_multipath 00:11:12.108 ************************************ 00:11:12.108 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:12.367 * Looking for test storage... 
00:11:12.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:12.367 08:47:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:12.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.367 --rc genhtml_branch_coverage=1 00:11:12.367 --rc genhtml_function_coverage=1 00:11:12.367 --rc genhtml_legend=1 00:11:12.367 --rc geninfo_all_blocks=1 00:11:12.367 --rc geninfo_unexecuted_blocks=1 00:11:12.367 00:11:12.367 ' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:12.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.367 --rc genhtml_branch_coverage=1 00:11:12.367 --rc genhtml_function_coverage=1 00:11:12.367 --rc genhtml_legend=1 00:11:12.367 --rc geninfo_all_blocks=1 00:11:12.367 --rc geninfo_unexecuted_blocks=1 00:11:12.367 00:11:12.367 ' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:12.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.367 --rc genhtml_branch_coverage=1 00:11:12.367 --rc genhtml_function_coverage=1 00:11:12.367 --rc genhtml_legend=1 00:11:12.367 --rc geninfo_all_blocks=1 00:11:12.367 --rc geninfo_unexecuted_blocks=1 00:11:12.367 00:11:12.367 ' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:12.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.367 --rc genhtml_branch_coverage=1 00:11:12.367 --rc genhtml_function_coverage=1 00:11:12.367 --rc genhtml_legend=1 00:11:12.367 --rc geninfo_all_blocks=1 00:11:12.367 --rc geninfo_unexecuted_blocks=1 00:11:12.367 00:11:12.367 ' 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.367 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.368 08:47:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.904 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:14.905 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:14.905 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:14.905 Found net devices under 0000:09:00.0: cvl_0_0 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:14.905 08:47:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:14.905 Found net devices under 0000:09:00.1: cvl_0_1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.905 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:11:14.906 00:11:14.906 --- 10.0.0.2 ping statistics --- 00:11:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.906 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:11:14.906 00:11:14.906 --- 10.0.0.1 ping statistics --- 00:11:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.906 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:14.906 only one NIC for nvmf test 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:14.906 08:47:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.906 rmmod nvme_tcp 00:11:14.906 rmmod nvme_fabrics 00:11:14.906 rmmod nvme_keyring 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.906 08:47:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:16.811 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.812 00:11:16.812 real 0m4.626s 00:11:16.812 user 0m0.942s 00:11:16.812 sys 0m1.699s 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.812 08:47:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 ************************************ 00:11:16.812 END TEST nvmf_target_multipath 00:11:16.812 ************************************ 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 ************************************ 00:11:16.812 START TEST nvmf_zcopy 00:11:16.812 ************************************ 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.812 * Looking for test storage... 00:11:16.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:11:16.812 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.071 08:47:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.071 --rc genhtml_branch_coverage=1 00:11:17.071 --rc genhtml_function_coverage=1 00:11:17.071 --rc genhtml_legend=1 00:11:17.071 --rc geninfo_all_blocks=1 00:11:17.071 --rc geninfo_unexecuted_blocks=1 00:11:17.071 00:11:17.071 ' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.071 --rc genhtml_branch_coverage=1 00:11:17.071 --rc genhtml_function_coverage=1 00:11:17.071 --rc genhtml_legend=1 00:11:17.071 --rc geninfo_all_blocks=1 00:11:17.071 --rc geninfo_unexecuted_blocks=1 00:11:17.071 00:11:17.071 ' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.071 --rc genhtml_branch_coverage=1 00:11:17.071 --rc genhtml_function_coverage=1 00:11:17.071 --rc genhtml_legend=1 00:11:17.071 --rc geninfo_all_blocks=1 00:11:17.071 --rc geninfo_unexecuted_blocks=1 00:11:17.071 00:11:17.071 ' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:17.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.071 --rc genhtml_branch_coverage=1 00:11:17.071 --rc 
genhtml_function_coverage=1 00:11:17.071 --rc genhtml_legend=1 00:11:17.071 --rc geninfo_all_blocks=1 00:11:17.071 --rc geninfo_unexecuted_blocks=1 00:11:17.071 00:11:17.071 ' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.071 08:47:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.071 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:17.072 08:47:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.072 08:47:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.605 08:47:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.605 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:19.606 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:19.606 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:19.606 Found net devices under 0000:09:00.0: cvl_0_0 00:11:19.606 08:47:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:19.606 Found net devices under 0000:09:00.1: cvl_0_1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.606 08:47:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:11:19.606 00:11:19.606 --- 10.0.0.2 ping statistics --- 00:11:19.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.606 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:11:19.606 00:11:19.606 --- 10.0.0.1 ping statistics --- 00:11:19.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.606 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=747784 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 747784 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 747784 ']' 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.606 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 [2024-11-06 08:47:32.530255] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:11:19.607 [2024-11-06 08:47:32.530329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.607 [2024-11-06 08:47:32.599436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.607 [2024-11-06 08:47:32.654846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.607 [2024-11-06 08:47:32.654900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.607 [2024-11-06 08:47:32.654913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.607 [2024-11-06 08:47:32.654924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.607 [2024-11-06 08:47:32.654933] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.607 [2024-11-06 08:47:32.655506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 [2024-11-06 08:47:32.804911] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 [2024-11-06 08:47:32.821139] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 malloc0 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:19.607 { 00:11:19.607 "params": { 00:11:19.607 "name": "Nvme$subsystem", 00:11:19.607 "trtype": "$TEST_TRANSPORT", 00:11:19.607 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.607 "adrfam": "ipv4", 00:11:19.607 "trsvcid": "$NVMF_PORT", 00:11:19.607 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.607 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.607 "hdgst": ${hdgst:-false}, 00:11:19.607 "ddgst": ${ddgst:-false} 00:11:19.607 }, 00:11:19.607 "method": "bdev_nvme_attach_controller" 00:11:19.607 } 00:11:19.607 EOF 00:11:19.607 )") 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:11:19.607 08:47:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:19.607 "params": { 00:11:19.607 "name": "Nvme1", 00:11:19.607 "trtype": "tcp", 00:11:19.607 "traddr": "10.0.0.2", 00:11:19.607 "adrfam": "ipv4", 00:11:19.607 "trsvcid": "4420", 00:11:19.607 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.607 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.607 "hdgst": false, 00:11:19.607 "ddgst": false 00:11:19.607 }, 00:11:19.607 "method": "bdev_nvme_attach_controller" 00:11:19.607 }' 00:11:19.866 [2024-11-06 08:47:32.908693] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:11:19.866 [2024-11-06 08:47:32.908781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747805 ] 00:11:19.866 [2024-11-06 08:47:32.980313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.866 [2024-11-06 08:47:33.039585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.123 Running I/O for 10 seconds... 
00:11:22.430 5866.00 IOPS, 45.83 MiB/s [2024-11-06T07:47:36.653Z] 5926.50 IOPS, 46.30 MiB/s [2024-11-06T07:47:37.587Z] 5943.00 IOPS, 46.43 MiB/s [2024-11-06T07:47:38.522Z] 5937.75 IOPS, 46.39 MiB/s [2024-11-06T07:47:39.455Z] 5942.40 IOPS, 46.42 MiB/s [2024-11-06T07:47:40.831Z] 5940.17 IOPS, 46.41 MiB/s [2024-11-06T07:47:41.764Z] 5940.71 IOPS, 46.41 MiB/s [2024-11-06T07:47:42.698Z] 5945.25 IOPS, 46.45 MiB/s [2024-11-06T07:47:43.632Z] 5942.89 IOPS, 46.43 MiB/s [2024-11-06T07:47:43.632Z] 5935.90 IOPS, 46.37 MiB/s 00:11:30.343 Latency(us) 00:11:30.343 [2024-11-06T07:47:43.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.343 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:30.343 Verification LBA range: start 0x0 length 0x1000 00:11:30.343 Nvme1n1 : 10.06 5915.59 46.22 0.00 0.00 21500.32 3737.98 45049.93 00:11:30.343 [2024-11-06T07:47:43.632Z] =================================================================================================================== 00:11:30.343 [2024-11-06T07:47:43.632Z] Total : 5915.59 46.22 0.00 0.00 21500.32 3737.98 45049.93 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=749124 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:11:30.602 08:47:43 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:30.602 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:30.602 { 00:11:30.602 "params": { 00:11:30.603 "name": "Nvme$subsystem", 00:11:30.603 "trtype": "$TEST_TRANSPORT", 00:11:30.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:30.603 "adrfam": "ipv4", 00:11:30.603 "trsvcid": "$NVMF_PORT", 00:11:30.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:30.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:30.603 "hdgst": ${hdgst:-false}, 00:11:30.603 "ddgst": ${ddgst:-false} 00:11:30.603 }, 00:11:30.603 "method": "bdev_nvme_attach_controller" 00:11:30.603 } 00:11:30.603 EOF 00:11:30.603 )") 00:11:30.603 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:11:30.603 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:11:30.603 [2024-11-06 08:47:43.698216] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.698253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:11:30.603 08:47:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:30.603 "params": { 00:11:30.603 "name": "Nvme1", 00:11:30.603 "trtype": "tcp", 00:11:30.603 "traddr": "10.0.0.2", 00:11:30.603 "adrfam": "ipv4", 00:11:30.603 "trsvcid": "4420", 00:11:30.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:30.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:30.603 "hdgst": false, 00:11:30.603 "ddgst": false 00:11:30.603 }, 00:11:30.603 "method": "bdev_nvme_attach_controller" 00:11:30.603 }' 00:11:30.603 [2024-11-06 08:47:43.706160] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.706183] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.714169] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.714191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.722201] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.722225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.730213] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.730234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.736655] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:11:30.603 [2024-11-06 08:47:43.736723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid749124 ] 00:11:30.603 [2024-11-06 08:47:43.738242] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.738263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.746250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.746271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.754267] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.754288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.762290] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.762310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.770313] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.770333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.778334] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.778354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.786358] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.786379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.794379] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.794400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.802402] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.802421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.803952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.603 [2024-11-06 08:47:43.810434] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.810457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.818470] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.818503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:30.603 [2024-11-06 08:47:43.826467] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.826488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.834487] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.834507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.842507] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.842527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.850530] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.850551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.858549] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.858577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.864426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.603 [2024-11-06 08:47:43.866571] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.866591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.874592] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.874612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.882636] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.882665] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.603 [2024-11-06 08:47:43.890664] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.603 [2024-11-06 08:47:43.890697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.861 [2024-11-06 08:47:43.898683] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.898716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.906702] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.906735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.914723] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.914755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.922741] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.922774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.930762] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.930794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.938764] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.938784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.946807] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.946862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:30.862 [2024-11-06 08:47:43.954855] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.954886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.962877] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.962911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.970876] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.970899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.978912] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.978935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.986908] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.986932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:43.994933] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:43.994960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.002967] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.002991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.010988] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.011013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.019008] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.019033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.027031] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.027053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.035053] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.035075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.043075] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.043096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.051098] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.051134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.059138] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.059160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.067159] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.067182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.075183] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.075205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.083216] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.083237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.091245] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.091269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.099257] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.099279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 Running I/O for 5 seconds... 00:11:30.862 [2024-11-06 08:47:44.107278] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.107299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.120577] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.120621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.133505] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.133534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.862 [2024-11-06 08:47:44.143347] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.862 [2024-11-06 08:47:44.143390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.154671] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.154700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.167737] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.167766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.177977] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.178004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.188222] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.188250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.198973] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.199001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.209641] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.209669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.220362] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.220391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.233182] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.233211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.245133] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.245161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.254204] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 
[2024-11-06 08:47:44.254232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.266066] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.266095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.276690] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.276718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.287610] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.287638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.297995] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.298024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.308796] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.308824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.321156] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.321184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.332564] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.332605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.341816] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.341857] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.353155] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.353183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.365910] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.365938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.376065] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.376093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.386357] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.386385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.397077] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.397105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.120 [2024-11-06 08:47:44.407684] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.120 [2024-11-06 08:47:44.407713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.418497] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.418525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.430864] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.430894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:31.378 [2024-11-06 08:47:44.440367] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.440410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.450945] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.450973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.461536] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.461564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.474388] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.474417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.484779] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.378 [2024-11-06 08:47:44.484808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.378 [2024-11-06 08:47:44.495254] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.495290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.506049] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.506078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.516754] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.516783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.530152] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.530181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.540214] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.540242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.550937] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.550966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.563798] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.563827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.574230] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.574258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.584764] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.584792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.595666] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.595707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.606553] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.606581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.617612] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.617640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.628498] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.628526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.641650] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.641678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.652014] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.652043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.379 [2024-11-06 08:47:44.662607] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.379 [2024-11-06 08:47:44.662635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.673190] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.673219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.683817] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.683854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.694416] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.694444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.707323] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 
[2024-11-06 08:47:44.707352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.717795] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.717823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.728498] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.728527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.739005] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.739036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.749642] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.749671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.760155] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.760184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.770801] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.770839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.783434] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.783477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.793451] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.793479] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.804057] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.804092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.814937] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.814965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.825365] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.825393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.836018] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.836046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.846673] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.846702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.857201] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.857229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.867731] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.867759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.638 [2024-11-06 08:47:44.878256] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.878284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:31.638 [2024-11-06 08:47:44.888883] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:31.638 [2024-11-06 08:47:44.888911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats roughly every 10 ms, timestamps 08:47:44.899440 through 08:47:45.106865 ...]
00:11:31.897 11847.00 IOPS, 92.55 MiB/s [2024-11-06T07:47:45.186Z]
[... error pair continues repeating, timestamps 08:47:45.119262 through 08:47:46.108052 ...]
00:11:32.931 11826.50 IOPS, 92.39 MiB/s [2024-11-06T07:47:46.220Z]
[... error pair continues repeating, timestamps 08:47:46.120806 through 08:47:46.762419 ...]
00:11:33.707 [2024-11-06 08:47:46.773288] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.773316] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.786127] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.786156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.796377] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.796407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.807383] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.807412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.820313] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.820342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.830076] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.830104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.840961] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.840989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.853709] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.853737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.864092] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.864121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:33.707 [2024-11-06 08:47:46.874680] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.874708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.885370] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.885398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.896215] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.896243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.909005] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.909034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.919154] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.919183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.929513] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.929541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.940155] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.940182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.950705] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.707 [2024-11-06 08:47:46.950733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.707 [2024-11-06 08:47:46.961310] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.708 [2024-11-06 08:47:46.961339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.708 [2024-11-06 08:47:46.972325] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.708 [2024-11-06 08:47:46.972353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.708 [2024-11-06 08:47:46.985044] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.708 [2024-11-06 08:47:46.985072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.708 [2024-11-06 08:47:46.995356] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.708 [2024-11-06 08:47:46.995384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.006025] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.006053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.016720] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.016749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.027045] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.027073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.037687] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.037716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.048331] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.048359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.058807] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.058846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.069827] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.069867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.080461] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.080490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.091401] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.091429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.103972] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.104000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 11831.33 IOPS, 92.43 MiB/s [2024-11-06T07:47:47.255Z] [2024-11-06 08:47:47.113406] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.113434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.124902] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.124929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.135958] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.135986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.146756] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.146784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.158900] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.158927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.168215] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.168241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.180046] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.180074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.190789] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.190816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.201480] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.201507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.214048] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.214075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.226001] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 
[2024-11-06 08:47:47.226029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.235248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.235276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.966 [2024-11-06 08:47:47.246816] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.966 [2024-11-06 08:47:47.246853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.257429] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.257459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.267503] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.267531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.277903] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.277931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.288683] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.288718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.299585] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.299613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.310241] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.310268] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.320905] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.320932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.331391] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.331420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.342285] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.342312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.353051] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.353079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.365430] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.365458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.375218] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.375245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.386144] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.386171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.399272] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.399300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:34.225 [2024-11-06 08:47:47.409473] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.409501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.420141] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.420169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.430971] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.430998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.441753] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.441781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.454303] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.454330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.463859] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.225 [2024-11-06 08:47:47.463886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.225 [2024-11-06 08:47:47.475164] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.226 [2024-11-06 08:47:47.475191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.226 [2024-11-06 08:47:47.487862] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.226 [2024-11-06 08:47:47.487889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.226 [2024-11-06 08:47:47.499737] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.226 [2024-11-06 08:47:47.499772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.226 [2024-11-06 08:47:47.509185] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.226 [2024-11-06 08:47:47.509227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.520616] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.520643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.531460] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.531487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.542232] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.542260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.555308] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.555337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.565334] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.565360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.576172] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.576200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.587018] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.587045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.597820] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.597870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.610012] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.610040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.620459] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.620487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.631327] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.631354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.643537] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.643564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.653769] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.653797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.664558] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.664585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.675236] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 
[2024-11-06 08:47:47.675264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.686283] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.686311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.696714] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.696741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.707363] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.707417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.717974] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.718002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.728612] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.728640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.741445] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.741472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.751815] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.751850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.762294] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.762321] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.488 [2024-11-06 08:47:47.772860] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.488 [2024-11-06 08:47:47.772888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.783329] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.783370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.793651] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.793679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.804309] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.804336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.814969] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.814997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.825571] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.825599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.836231] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.836258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.847145] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.847172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:34.752 [2024-11-06 08:47:47.858110] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.858138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.868926] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.868954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.879159] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.879186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.889848] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.889875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.900564] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.900591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.911262] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.911290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.924615] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.924643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.935263] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.935306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.945682] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.945710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.956438] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.956465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.967574] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.967601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.978629] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.978656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:47.991003] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:47.991030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:48.000874] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:48.000902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:48.011842] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:48.011869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:48.025284] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:48.025313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:34.752 [2024-11-06 08:47:48.037412] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:34.752 [2024-11-06 08:47:48.037440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.011 [2024-11-06 08:47:48.046707] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.011 [2024-11-06 08:47:48.046735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.011 [2024-11-06 08:47:48.058154] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.011 [2024-11-06 08:47:48.058181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.071856] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.071883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.082803] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.082839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.093616] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.093645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.104735] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.104763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 11837.50 IOPS, 92.48 MiB/s [2024-11-06T07:47:48.301Z] [2024-11-06 08:47:48.115519] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.115547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.128076] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.128114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.140039] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.140067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.149045] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.149073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.160384] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.160411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.170775] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.170802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.182059] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.182087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.194652] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.194695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.205021] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.205048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.215972] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 
[2024-11-06 08:47:48.216000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.228846] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.228883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.239259] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.239287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.250108] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.250135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.262734] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.262776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.272614] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.272642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.283114] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.283141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.012 [2024-11-06 08:47:48.296065] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.012 [2024-11-06 08:47:48.296092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.307966] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.307994] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.317048] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.317076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.328813] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.328854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.339524] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.339552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.350439] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.350465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.363467] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.363495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.373599] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.373627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.384458] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.384487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.397791] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.397819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:35.270 [2024-11-06 08:47:48.407521] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.270 [2024-11-06 08:47:48.407548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.270 [2024-11-06 08:47:48.418953] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.418980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.429251] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.429278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.440219] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.440246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.451330] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.451358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.461904] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.461931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.474739] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.474780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.485413] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.485441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.495963] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.495991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.506802] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.506837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.517198] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.517225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.527896] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.527923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.538599] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.538634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.271 [2024-11-06 08:47:48.549583] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.271 [2024-11-06 08:47:48.549611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.560247] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.560274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.572654] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.572682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.582824] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.582860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.593525] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.593552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.604023] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.604050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.614442] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.614482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.625242] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.625269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.638209] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.638249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.647884] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.647912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.658720] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.658747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.669658] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 
[2024-11-06 08:47:48.669686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.682404] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.682431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.692301] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.692329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.703067] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.703094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.714107] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.714134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.727487] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.727514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.739100] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.739127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.748370] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.748406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.759984] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.760011] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.773098] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.773125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.783305] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.783332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.794157] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.529 [2024-11-06 08:47:48.794184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.529 [2024-11-06 08:47:48.806767] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.530 [2024-11-06 08:47:48.806794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.530 [2024-11-06 08:47:48.817064] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.530 [2024-11-06 08:47:48.817090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.827691] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.827718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.838408] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.838435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.849240] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.849267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:35.788 [2024-11-06 08:47:48.861751] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.861778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.871662] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.871689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.882452] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.882479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.893112] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.893139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.903809] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.903844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.914203] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.914230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.924862] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.924889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.938776] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.938803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.955046] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.955077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.965199] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.965236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.976103] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.976129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.988525] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.988552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:48.998427] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:48.998455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.008948] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.008975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.019847] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.019875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.032694] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.032721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.043261] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.043288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.053740] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.053767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.064661] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.064689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.788 [2024-11-06 08:47:49.075436] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.788 [2024-11-06 08:47:49.075463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.085862] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.085889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.096618] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.096646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.109243] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.109270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 11838.40 IOPS, 92.49 MiB/s [2024-11-06T07:47:49.336Z] [2024-11-06 08:47:49.121107] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.121134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.127628] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.127651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 00:11:36.047 Latency(us) 00:11:36.047 [2024-11-06T07:47:49.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:36.047 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:36.047 Nvme1n1 : 5.01 11841.31 92.51 0.00 0.00 10796.05 4611.79 20388.98 00:11:36.047 [2024-11-06T07:47:49.336Z] =================================================================================================================== 00:11:36.047 [2024-11-06T07:47:49.336Z] Total : 11841.31 92.51 0.00 0.00 10796.05 4611.79 20388.98 00:11:36.047 [2024-11-06 08:47:49.135654] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.135676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.143673] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.047 [2024-11-06 08:47:49.143696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.047 [2024-11-06 08:47:49.151717] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.151746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.159770] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.159813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.167788] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.167840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.175807] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.175856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.183840] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.183879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.191852] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.191891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.199885] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.199927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.207904] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.207945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.215923] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.215963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.223946] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.223987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.231966] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.232008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.239999] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.240041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.248007] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.248048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.256024] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.256063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.264047] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.264088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.272069] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.272120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.280050] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.280074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.288069] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.288098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.296090] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.296111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.304125] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 
[2024-11-06 08:47:49.304146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.312155] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.312184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.320202] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.320243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.328224] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.328266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.048 [2024-11-06 08:47:49.336220] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.048 [2024-11-06 08:47:49.336239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.306 [2024-11-06 08:47:49.344239] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.306 [2024-11-06 08:47:49.344258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.306 [2024-11-06 08:47:49.352259] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:36.306 [2024-11-06 08:47:49.352279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:36.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (749124) - No such process 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 749124 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.306 delay0 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.306 08:47:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:36.307 [2024-11-06 08:47:49.437311] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:42.865 [2024-11-06 08:47:55.632024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa11540 is same with the state(6) to be set 00:11:42.865 Initializing NVMe Controllers 00:11:42.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:11:42.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:42.865 Initialization complete. Launching workers. 00:11:42.865 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118 00:11:42.865 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 405, failed to submit 33 00:11:42.865 success 235, unsuccessful 170, failed 0 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.865 rmmod nvme_tcp 00:11:42.865 rmmod nvme_fabrics 00:11:42.865 rmmod nvme_keyring 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 747784 ']' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 747784 ']' 
00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 747784' 00:11:42.865 killing process with pid 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 747784 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.865 08:47:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.865 08:47:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.774 00:11:44.774 real 0m27.991s 00:11:44.774 user 0m41.383s 00:11:44.774 sys 0m8.067s 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.774 ************************************ 00:11:44.774 END TEST nvmf_zcopy 00:11:44.774 ************************************ 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.774 08:47:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.036 ************************************ 00:11:45.036 START TEST nvmf_nmic 00:11:45.036 ************************************ 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:45.036 * Looking for test storage... 
00:11:45.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.036 08:47:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.036 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.037 --rc genhtml_branch_coverage=1 00:11:45.037 --rc genhtml_function_coverage=1 00:11:45.037 --rc genhtml_legend=1 00:11:45.037 --rc geninfo_all_blocks=1 00:11:45.037 --rc geninfo_unexecuted_blocks=1 
00:11:45.037 00:11:45.037 ' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.037 --rc genhtml_branch_coverage=1 00:11:45.037 --rc genhtml_function_coverage=1 00:11:45.037 --rc genhtml_legend=1 00:11:45.037 --rc geninfo_all_blocks=1 00:11:45.037 --rc geninfo_unexecuted_blocks=1 00:11:45.037 00:11:45.037 ' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.037 --rc genhtml_branch_coverage=1 00:11:45.037 --rc genhtml_function_coverage=1 00:11:45.037 --rc genhtml_legend=1 00:11:45.037 --rc geninfo_all_blocks=1 00:11:45.037 --rc geninfo_unexecuted_blocks=1 00:11:45.037 00:11:45.037 ' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:45.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.037 --rc genhtml_branch_coverage=1 00:11:45.037 --rc genhtml_function_coverage=1 00:11:45.037 --rc genhtml_legend=1 00:11:45.037 --rc geninfo_all_blocks=1 00:11:45.037 --rc geninfo_unexecuted_blocks=1 00:11:45.037 00:11:45.037 ' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.037 08:47:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:45.037 
08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.037 08:47:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.572 08:48:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.572 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:47.573 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:47.573 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:47.573 Found net devices under 0000:09:00.0: cvl_0_0 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:47.573 Found net devices under 0000:09:00.1: cvl_0_1 00:11:47.573 
08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:11:47.573 00:11:47.573 --- 10.0.0.2 ping statistics --- 00:11:47.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.573 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:11:47.573 00:11:47.573 --- 10.0.0.1 ping statistics --- 00:11:47.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.573 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:47.573 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=752458 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 752458 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 752458 
']' 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.574 [2024-11-06 08:48:00.574178] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:11:47.574 [2024-11-06 08:48:00.574275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.574 [2024-11-06 08:48:00.646762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.574 [2024-11-06 08:48:00.708093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.574 [2024-11-06 08:48:00.708159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.574 [2024-11-06 08:48:00.708187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.574 [2024-11-06 08:48:00.708198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:47.574 [2024-11-06 08:48:00.708208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.574 [2024-11-06 08:48:00.709902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.574 [2024-11-06 08:48:00.709958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.574 [2024-11-06 08:48:00.709961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.574 [2024-11-06 08:48:00.709930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.574 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.574 [2024-11-06 08:48:00.860435] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:47.833 08:48:00 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 Malloc0 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 [2024-11-06 08:48:00.926495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:47.833 test case1: single bdev can't be used in multiple subsystems 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 [2024-11-06 08:48:00.950336] bdev.c:8456:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:47.833 [2024-11-06 08:48:00.950367] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:47.833 [2024-11-06 08:48:00.950397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:47.833 request: 00:11:47.833 { 00:11:47.833 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.833 "namespace": { 00:11:47.833 "bdev_name": "Malloc0", 00:11:47.833 "no_auto_visible": false, 00:11:47.833 "no_metadata": false 00:11:47.833 }, 00:11:47.833 "method": "nvmf_subsystem_add_ns", 00:11:47.833 "req_id": 1 00:11:47.833 } 00:11:47.833 Got JSON-RPC error response 00:11:47.833 response: 00:11:47.833 { 00:11:47.833 "code": -32602, 00:11:47.833 "message": "Invalid parameters" 00:11:47.833 } 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:47.833 Adding namespace failed - expected result. 
00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:47.833 test case2: host connect to nvmf target in multiple paths 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.833 [2024-11-06 08:48:00.958464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.833 08:48:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.398 08:48:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:48.964 08:48:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.964 08:48:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.964 08:48:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.964 08:48:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.964 08:48:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:51.490 08:48:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:51.490 [global] 00:11:51.490 thread=1 00:11:51.490 invalidate=1 00:11:51.490 rw=write 00:11:51.490 time_based=1 00:11:51.490 runtime=1 00:11:51.490 ioengine=libaio 00:11:51.490 direct=1 00:11:51.490 bs=4096 00:11:51.490 iodepth=1 00:11:51.490 norandommap=0 00:11:51.490 numjobs=1 00:11:51.490 00:11:51.491 verify_dump=1 00:11:51.491 verify_backlog=512 00:11:51.491 verify_state_save=0 00:11:51.491 do_verify=1 00:11:51.491 verify=crc32c-intel 00:11:51.491 [job0] 00:11:51.491 filename=/dev/nvme0n1 00:11:51.491 Could not set queue depth (nvme0n1) 00:11:51.491 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.491 fio-3.35 00:11:51.491 Starting 1 thread 00:11:52.423 00:11:52.423 job0: (groupid=0, jobs=1): err= 0: pid=753168: Wed Nov 6 08:48:05 2024 00:11:52.423 read: IOPS=2421, BW=9686KiB/s (9919kB/s)(9696KiB/1001msec) 00:11:52.423 slat (nsec): min=4293, max=20414, avg=5924.63, stdev=1935.04 00:11:52.423 clat (usec): min=183, max=535, avg=228.64, stdev=28.85 00:11:52.423 lat (usec): min=188, max=549, avg=234.57, 
stdev=29.30 00:11:52.423 clat percentiles (usec): 00:11:52.423 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:11:52.423 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:11:52.423 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:11:52.423 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 523], 99.95th=[ 529], 00:11:52.423 | 99.99th=[ 537] 00:11:52.423 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:52.423 slat (nsec): min=5736, max=51055, avg=7392.16, stdev=1769.37 00:11:52.423 clat (usec): min=123, max=389, avg=157.49, stdev=23.49 00:11:52.423 lat (usec): min=129, max=440, avg=164.88, stdev=23.85 00:11:52.423 clat percentiles (usec): 00:11:52.423 | 1.00th=[ 131], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 143], 00:11:52.423 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 151], 60.00th=[ 155], 00:11:52.423 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 182], 95.00th=[ 221], 00:11:52.424 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 273], 99.95th=[ 310], 00:11:52.424 | 99.99th=[ 392] 00:11:52.424 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:52.424 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:52.424 lat (usec) : 250=92.98%, 500=6.92%, 750=0.10% 00:11:52.424 cpu : usr=1.70%, sys=3.30%, ctx=4984, majf=0, minf=1 00:11:52.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.424 issued rwts: total=2424,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.424 00:11:52.424 Run status group 0 (all jobs): 00:11:52.424 READ: bw=9686KiB/s (9919kB/s), 9686KiB/s-9686KiB/s (9919kB/s-9919kB/s), io=9696KiB (9929kB), run=1001-1001msec 00:11:52.424 WRITE: bw=9.99MiB/s 
(10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:11:52.424 00:11:52.424 Disk stats (read/write): 00:11:52.424 nvme0n1: ios=2098/2486, merge=0/0, ticks=688/390, in_queue=1078, util=95.49% 00:11:52.424 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.681 08:48:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.681 rmmod nvme_tcp 00:11:52.681 rmmod nvme_fabrics 00:11:52.681 rmmod nvme_keyring 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 752458 ']' 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 752458 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 752458 ']' 00:11:52.681 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 752458 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 752458 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 752458' 00:11:52.682 killing process with pid 752458 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 752458 00:11:52.682 08:48:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 752458 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 
00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.940 08:48:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.940 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.940 00:11:54.940 real 0m10.100s 00:11:54.940 user 0m22.482s 00:11:54.940 sys 0m2.628s 00:11:54.940 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.940 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:54.940 ************************************ 00:11:54.940 END TEST nvmf_nmic 00:11:54.940 ************************************ 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:55.225 08:48:08 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.225 ************************************ 00:11:55.225 START TEST nvmf_fio_target 00:11:55.225 ************************************ 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:55.225 * Looking for test storage... 00:11:55.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.225 
08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:55.225 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:11:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.226 --rc genhtml_branch_coverage=1 00:11:55.226 --rc genhtml_function_coverage=1 00:11:55.226 --rc genhtml_legend=1 00:11:55.226 --rc geninfo_all_blocks=1 00:11:55.226 --rc geninfo_unexecuted_blocks=1 00:11:55.226 00:11:55.226 ' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:11:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.226 --rc genhtml_branch_coverage=1 00:11:55.226 --rc genhtml_function_coverage=1 00:11:55.226 --rc genhtml_legend=1 00:11:55.226 --rc geninfo_all_blocks=1 00:11:55.226 --rc geninfo_unexecuted_blocks=1 00:11:55.226 00:11:55.226 ' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:11:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.226 --rc genhtml_branch_coverage=1 00:11:55.226 --rc genhtml_function_coverage=1 00:11:55.226 --rc genhtml_legend=1 00:11:55.226 --rc geninfo_all_blocks=1 00:11:55.226 --rc geninfo_unexecuted_blocks=1 00:11:55.226 00:11:55.226 ' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:11:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.226 --rc genhtml_branch_coverage=1 00:11:55.226 --rc 
genhtml_function_coverage=1 00:11:55.226 --rc genhtml_legend=1 00:11:55.226 --rc geninfo_all_blocks=1 00:11:55.226 --rc geninfo_unexecuted_blocks=1 00:11:55.226 00:11:55.226 ' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.226 08:48:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.757 08:48:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:57.757 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:57.757 08:48:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:57.757 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:57.757 Found net devices under 0000:09:00.0: cvl_0_0 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:57.757 Found net devices under 0000:09:00.1: cvl_0_1 
00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.757 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:11:57.758 00:11:57.758 --- 10.0.0.2 ping statistics --- 00:11:57.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.758 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:11:57.758 00:11:57.758 --- 10.0.0.1 ping statistics --- 00:11:57.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.758 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
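The namespace setup traced above (flush both ports, move cvl_0_0 into a private netns, assign 10.0.0.1/10.0.0.2, open TCP 4420 in iptables, then ping in both directions) can be sketched as a dry-run script. Interface names, addresses, and the port are taken from the log; the `run` wrapper is an illustrative addition that only prints each command, so the topology can be reviewed without root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
# The `run` wrapper prints commands instead of executing them; replace its
# body with "$@" (and run as root) to actually apply the configuration.
set -euo pipefail

TARGET_NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace, serves 10.0.0.2:4420
INITIATOR_IF=cvl_0_1   # stays in the root namespace as 10.0.0.1

run() { printf '%s\n' "$*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$TARGET_NS"
run ip link set "$TARGET_IF" netns "$TARGET_NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
run ip netns exec "$TARGET_NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the log's sanity check: the root namespace must reach the target IP, and the namespace must reach back to the initiator IP, before the NVMe-oF target is started inside the namespace.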
00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=755773 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 755773 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 755773 ']' 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.758 [2024-11-06 08:48:10.698346] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:11:57.758 [2024-11-06 08:48:10.698442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.758 [2024-11-06 08:48:10.777663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.758 [2024-11-06 08:48:10.841164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.758 [2024-11-06 08:48:10.841221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.758 [2024-11-06 08:48:10.841236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.758 [2024-11-06 08:48:10.841248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.758 [2024-11-06 08:48:10.841258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.758 [2024-11-06 08:48:10.842847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.758 [2024-11-06 08:48:10.842893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.758 [2024-11-06 08:48:10.842958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.758 [2024-11-06 08:48:10.842962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.758 08:48:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.758 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.758 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:58.016 [2024-11-06 08:48:11.257612] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.016 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.582 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:58.582 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.582 08:48:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:58.582 08:48:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.147 08:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:59.148 08:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.405 08:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:59.405 08:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:59.663 08:48:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.920 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:59.920 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.178 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:00.178 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.436 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:00.436 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:00.693 08:48:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.951 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:00.951 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.209 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:01.209 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.466 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.724 [2024-11-06 08:48:14.879080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.724 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:01.982 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:02.239 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
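The RPC sequence fio.sh drives above (create the TCP transport, a set of 64 MiB/512 B malloc bdevs, a raid0 and a concat array, then a subsystem with namespaces and a listener) can be condensed into a sketch. `rpc_py` here just echoes each call so the order can be inspected without a running SPDK target; in the real run it is `spdk/scripts/rpc.py`, and the loop bounds are simplified relative to the seven bdevs the log actually creates.

```shell
#!/usr/bin/env bash
# Sketch of the rpc.py call sequence from the log above. rpc_py echoes the
# call instead of invoking spdk/scripts/rpc.py, so no SPDK target is needed.
set -euo pipefail

rpc_py() { printf 'rpc.py %s\n' "$*"; }   # swap for the real rpc.py path

NQN=nqn.2016-06.io.spdk:cnode1

rpc_py nvmf_create_transport -t tcp -o -u 8192
malloc_bdevs=""
for i in 0 1; do
  rpc_py bdev_malloc_create 64 512        # prints "Malloc$i" in a real run
  malloc_bdevs+="Malloc$i "
done
rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc_py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc_py nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for bdev in $malloc_bdevs raid0 concat0; do
  rpc_py nvmf_subsystem_add_ns "$NQN" "$bdev"
done
rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, the log's `nvme connect ... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420` exposes the four namespaces as /dev/nvme0n1 through /dev/nvme0n4 for the fio jobs.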
00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:02.805 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:05.332 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:05.332 [global] 00:12:05.332 thread=1 00:12:05.332 invalidate=1 00:12:05.332 rw=write 00:12:05.332 time_based=1 00:12:05.332 runtime=1 00:12:05.332 ioengine=libaio 00:12:05.332 direct=1 00:12:05.332 bs=4096 00:12:05.332 iodepth=1 00:12:05.332 norandommap=0 00:12:05.332 numjobs=1 00:12:05.332 00:12:05.332 
verify_dump=1 00:12:05.332 verify_backlog=512 00:12:05.332 verify_state_save=0 00:12:05.332 do_verify=1 00:12:05.332 verify=crc32c-intel 00:12:05.332 [job0] 00:12:05.332 filename=/dev/nvme0n1 00:12:05.332 [job1] 00:12:05.332 filename=/dev/nvme0n2 00:12:05.332 [job2] 00:12:05.332 filename=/dev/nvme0n3 00:12:05.332 [job3] 00:12:05.332 filename=/dev/nvme0n4 00:12:05.332 Could not set queue depth (nvme0n1) 00:12:05.332 Could not set queue depth (nvme0n2) 00:12:05.332 Could not set queue depth (nvme0n3) 00:12:05.332 Could not set queue depth (nvme0n4) 00:12:05.332 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.332 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.332 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.332 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.332 fio-3.35 00:12:05.332 Starting 4 threads 00:12:06.265 00:12:06.266 job0: (groupid=0, jobs=1): err= 0: pid=756845: Wed Nov 6 08:48:19 2024 00:12:06.266 read: IOPS=123, BW=495KiB/s (507kB/s)(504KiB/1018msec) 00:12:06.266 slat (nsec): min=5630, max=33822, avg=14096.43, stdev=8505.70 00:12:06.266 clat (usec): min=212, max=42072, avg=7138.55, stdev=15445.03 00:12:06.266 lat (usec): min=218, max=42081, avg=7152.64, stdev=15451.35 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 247], 00:12:06.266 | 30.00th=[ 255], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:12:06.266 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[41157], 95.00th=[42206], 00:12:06.266 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:06.266 | 99.99th=[42206] 00:12:06.266 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:12:06.266 slat (nsec): min=7416, max=30613, avg=8943.27, 
stdev=2814.40 00:12:06.266 clat (usec): min=164, max=1387, avg=215.47, stdev=81.51 00:12:06.266 lat (usec): min=173, max=1397, avg=224.42, stdev=81.99 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:12:06.266 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:12:06.266 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 281], 00:12:06.266 | 99.00th=[ 412], 99.50th=[ 922], 99.90th=[ 1385], 99.95th=[ 1385], 00:12:06.266 | 99.99th=[ 1385] 00:12:06.266 bw ( KiB/s): min= 4096, max= 4096, per=29.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:06.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:06.266 lat (usec) : 250=80.25%, 500=15.83%, 750=0.16%, 1000=0.31% 00:12:06.266 lat (msec) : 2=0.16%, 50=3.29% 00:12:06.266 cpu : usr=0.20%, sys=0.98%, ctx=638, majf=0, minf=1 00:12:06.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 issued rwts: total=126,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.266 job1: (groupid=0, jobs=1): err= 0: pid=756847: Wed Nov 6 08:48:19 2024 00:12:06.266 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6168KiB/1001msec) 00:12:06.266 slat (nsec): min=4000, max=54413, avg=14530.19, stdev=8045.85 00:12:06.266 clat (usec): min=180, max=40956, avg=392.88, stdev=2357.38 00:12:06.266 lat (usec): min=185, max=40990, avg=407.41, stdev=2358.40 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:12:06.266 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 235], 00:12:06.266 | 70.00th=[ 253], 80.00th=[ 273], 90.00th=[ 306], 95.00th=[ 457], 00:12:06.266 | 99.00th=[ 537], 99.50th=[ 603], 
99.90th=[41157], 99.95th=[41157], 00:12:06.266 | 99.99th=[41157] 00:12:06.266 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:06.266 slat (nsec): min=5934, max=62583, avg=13320.77, stdev=5811.91 00:12:06.266 clat (usec): min=121, max=393, avg=161.79, stdev=36.85 00:12:06.266 lat (usec): min=130, max=401, avg=175.11, stdev=36.94 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 125], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:12:06.266 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 155], 00:12:06.266 | 70.00th=[ 167], 80.00th=[ 186], 90.00th=[ 215], 95.00th=[ 241], 00:12:06.266 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 379], 99.95th=[ 392], 00:12:06.266 | 99.99th=[ 396] 00:12:06.266 bw ( KiB/s): min= 9640, max= 9640, per=68.32%, avg=9640.00, stdev= 0.00, samples=1 00:12:06.266 iops : min= 2410, max= 2410, avg=2410.00, stdev= 0.00, samples=1 00:12:06.266 lat (usec) : 250=84.62%, 500=14.71%, 750=0.47% 00:12:06.266 lat (msec) : 10=0.03%, 20=0.03%, 50=0.14% 00:12:06.266 cpu : usr=2.20%, sys=5.90%, ctx=3591, majf=0, minf=1 00:12:06.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.266 job2: (groupid=0, jobs=1): err= 0: pid=756852: Wed Nov 6 08:48:19 2024 00:12:06.266 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:12:06.266 slat (nsec): min=17567, max=40307, avg=31317.13, stdev=8714.16 00:12:06.266 clat (usec): min=269, max=41331, avg=39204.16, stdev=8487.89 00:12:06.266 lat (usec): min=289, max=41350, avg=39235.48, stdev=8490.40 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 269], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:12:06.266 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.266 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:06.266 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:06.266 | 99.99th=[41157] 00:12:06.266 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:12:06.266 slat (usec): min=8, max=21492, avg=51.92, stdev=949.40 00:12:06.266 clat (usec): min=153, max=341, avg=203.98, stdev=25.41 00:12:06.266 lat (usec): min=162, max=21759, avg=255.90, stdev=952.55 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 161], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:12:06.266 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 206], 00:12:06.266 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 258], 00:12:06.266 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 343], 99.95th=[ 343], 00:12:06.266 | 99.99th=[ 343] 00:12:06.266 bw ( KiB/s): min= 4096, max= 4096, per=29.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:06.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:06.266 lat (usec) : 250=89.53%, 500=6.36% 00:12:06.266 lat (msec) : 50=4.11% 00:12:06.266 cpu : usr=0.58%, sys=0.39%, ctx=537, majf=0, minf=1 00:12:06.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.266 job3: (groupid=0, jobs=1): err= 0: pid=756856: Wed Nov 6 08:48:19 2024 00:12:06.266 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:12:06.266 slat (nsec): min=4758, max=52390, avg=16936.76, stdev=6947.24 00:12:06.266 clat (usec): min=189, max=41376, avg=1715.32, stdev=7512.04 
00:12:06.266 lat (usec): min=194, max=41395, avg=1732.26, stdev=7514.36 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 231], 00:12:06.266 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:12:06.266 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 449], 95.00th=[ 523], 00:12:06.266 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:06.266 | 99.99th=[41157] 00:12:06.266 write: IOPS=578, BW=2314KiB/s (2369kB/s)(2316KiB/1001msec); 0 zone resets 00:12:06.266 slat (nsec): min=6264, max=47216, avg=9825.33, stdev=6053.03 00:12:06.266 clat (usec): min=138, max=871, avg=177.99, stdev=40.36 00:12:06.266 lat (usec): min=145, max=896, avg=187.81, stdev=42.33 00:12:06.266 clat percentiles (usec): 00:12:06.266 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:12:06.266 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:12:06.266 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 217], 95.00th=[ 235], 00:12:06.266 | 99.00th=[ 269], 99.50th=[ 379], 99.90th=[ 873], 99.95th=[ 873], 00:12:06.266 | 99.99th=[ 873] 00:12:06.266 bw ( KiB/s): min= 4096, max= 4096, per=29.03%, avg=4096.00, stdev= 0.00, samples=1 00:12:06.266 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:06.266 lat (usec) : 250=85.06%, 500=11.55%, 750=1.56%, 1000=0.09% 00:12:06.266 lat (msec) : 10=0.09%, 50=1.65% 00:12:06.266 cpu : usr=1.00%, sys=1.80%, ctx=1092, majf=0, minf=1 00:12:06.266 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.266 issued rwts: total=512,579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.266 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.266 00:12:06.266 Run status group 0 (all jobs): 00:12:06.266 READ: bw=8514KiB/s (8718kB/s), 
88.9KiB/s-6162KiB/s (91.0kB/s-6310kB/s), io=8812KiB (9023kB), run=1001-1035msec 00:12:06.266 WRITE: bw=13.8MiB/s (14.4MB/s), 1979KiB/s-8184KiB/s (2026kB/s-8380kB/s), io=14.3MiB (15.0MB), run=1001-1035msec 00:12:06.266 00:12:06.266 Disk stats (read/write): 00:12:06.266 nvme0n1: ios=171/512, merge=0/0, ticks=713/109, in_queue=822, util=86.27% 00:12:06.266 nvme0n2: ios=1587/2048, merge=0/0, ticks=1331/314, in_queue=1645, util=97.86% 00:12:06.266 nvme0n3: ios=77/512, merge=0/0, ticks=1228/101, in_queue=1329, util=98.22% 00:12:06.266 nvme0n4: ios=206/512, merge=0/0, ticks=913/82, in_queue=995, util=97.78% 00:12:06.266 08:48:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:06.266 [global] 00:12:06.266 thread=1 00:12:06.266 invalidate=1 00:12:06.266 rw=randwrite 00:12:06.266 time_based=1 00:12:06.266 runtime=1 00:12:06.266 ioengine=libaio 00:12:06.266 direct=1 00:12:06.266 bs=4096 00:12:06.266 iodepth=1 00:12:06.266 norandommap=0 00:12:06.266 numjobs=1 00:12:06.266 00:12:06.266 verify_dump=1 00:12:06.266 verify_backlog=512 00:12:06.266 verify_state_save=0 00:12:06.266 do_verify=1 00:12:06.266 verify=crc32c-intel 00:12:06.266 [job0] 00:12:06.266 filename=/dev/nvme0n1 00:12:06.266 [job1] 00:12:06.266 filename=/dev/nvme0n2 00:12:06.266 [job2] 00:12:06.266 filename=/dev/nvme0n3 00:12:06.524 [job3] 00:12:06.524 filename=/dev/nvme0n4 00:12:06.524 Could not set queue depth (nvme0n1) 00:12:06.524 Could not set queue depth (nvme0n2) 00:12:06.524 Could not set queue depth (nvme0n3) 00:12:06.524 Could not set queue depth (nvme0n4) 00:12:06.524 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.524 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.524 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.524 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.524 fio-3.35 00:12:06.524 Starting 4 threads 00:12:07.896 00:12:07.896 job0: (groupid=0, jobs=1): err= 0: pid=757083: Wed Nov 6 08:48:20 2024 00:12:07.896 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:07.896 slat (nsec): min=6440, max=75163, avg=14510.23, stdev=5325.55 00:12:07.896 clat (usec): min=179, max=741, avg=246.62, stdev=53.40 00:12:07.896 lat (usec): min=187, max=750, avg=261.13, stdev=54.55 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 221], 00:12:07.896 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:12:07.896 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 334], 00:12:07.896 | 99.00th=[ 519], 99.50th=[ 545], 99.90th=[ 603], 99.95th=[ 627], 00:12:07.896 | 99.99th=[ 742] 00:12:07.896 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:12:07.896 slat (nsec): min=6683, max=58898, avg=18284.08, stdev=6881.06 00:12:07.896 clat (usec): min=137, max=1287, avg=197.70, stdev=35.74 00:12:07.896 lat (usec): min=147, max=1297, avg=215.98, stdev=35.92 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 167], 20.00th=[ 180], 00:12:07.896 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:12:07.896 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 243], 00:12:07.896 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 392], 99.95th=[ 486], 00:12:07.896 | 99.99th=[ 1287] 00:12:07.896 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.896 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.896 lat (usec) : 250=85.79%, 500=13.58%, 750=0.61% 00:12:07.896 lat (msec) : 2=0.02% 00:12:07.896 cpu : usr=6.00%, sys=8.10%, ctx=4113, majf=0, minf=1 
00:12:07.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.896 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.896 job1: (groupid=0, jobs=1): err= 0: pid=757084: Wed Nov 6 08:48:20 2024 00:12:07.896 read: IOPS=1597, BW=6390KiB/s (6543kB/s)(6396KiB/1001msec) 00:12:07.896 slat (nsec): min=5911, max=47346, avg=13408.54, stdev=4779.44 00:12:07.896 clat (usec): min=215, max=566, avg=312.94, stdev=50.08 00:12:07.896 lat (usec): min=223, max=583, avg=326.35, stdev=50.29 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 231], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 269], 00:12:07.896 | 30.00th=[ 281], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:12:07.896 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 388], 95.00th=[ 416], 00:12:07.896 | 99.00th=[ 449], 99.50th=[ 474], 99.90th=[ 529], 99.95th=[ 570], 00:12:07.896 | 99.99th=[ 570] 00:12:07.896 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:07.896 slat (nsec): min=7155, max=73164, avg=16918.31, stdev=6216.32 00:12:07.896 clat (usec): min=144, max=1261, avg=208.13, stdev=47.61 00:12:07.896 lat (usec): min=152, max=1273, avg=225.05, stdev=48.25 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 184], 00:12:07.896 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:12:07.896 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 273], 00:12:07.896 | 99.00th=[ 338], 99.50th=[ 396], 99.90th=[ 734], 99.95th=[ 971], 00:12:07.896 | 99.99th=[ 1270] 00:12:07.896 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.896 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:12:07.896 lat (usec) : 250=53.06%, 500=46.78%, 750=0.11%, 1000=0.03% 00:12:07.896 lat (msec) : 2=0.03% 00:12:07.896 cpu : usr=4.10%, sys=8.10%, ctx=3649, majf=0, minf=1 00:12:07.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.896 issued rwts: total=1599,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.896 job2: (groupid=0, jobs=1): err= 0: pid=757085: Wed Nov 6 08:48:20 2024 00:12:07.896 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:07.896 slat (nsec): min=5726, max=57431, avg=12969.64, stdev=6244.18 00:12:07.896 clat (usec): min=184, max=41150, avg=681.13, stdev=4200.86 00:12:07.896 lat (usec): min=191, max=41163, avg=694.10, stdev=4201.38 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:12:07.896 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:12:07.896 | 70.00th=[ 233], 80.00th=[ 285], 90.00th=[ 338], 95.00th=[ 416], 00:12:07.896 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:07.896 | 99.99th=[41157] 00:12:07.896 write: IOPS=1419, BW=5678KiB/s (5815kB/s)(5684KiB/1001msec); 0 zone resets 00:12:07.896 slat (nsec): min=6179, max=40666, avg=12145.54, stdev=4881.70 00:12:07.896 clat (usec): min=137, max=1372, avg=185.52, stdev=48.80 00:12:07.896 lat (usec): min=144, max=1386, avg=197.66, stdev=48.70 00:12:07.896 clat percentiles (usec): 00:12:07.896 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:12:07.896 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 180], 00:12:07.896 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:12:07.896 | 99.00th=[ 281], 99.50th=[ 322], 99.90th=[ 529], 
99.95th=[ 1369], 00:12:07.896 | 99.99th=[ 1369] 00:12:07.896 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.896 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.896 lat (usec) : 250=86.79%, 500=12.47%, 750=0.25% 00:12:07.896 lat (msec) : 2=0.04%, 50=0.45% 00:12:07.897 cpu : usr=1.60%, sys=3.10%, ctx=2447, majf=0, minf=1 00:12:07.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.897 issued rwts: total=1024,1421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.897 job3: (groupid=0, jobs=1): err= 0: pid=757086: Wed Nov 6 08:48:20 2024 00:12:07.897 read: IOPS=922, BW=3689KiB/s (3777kB/s)(3744KiB/1015msec) 00:12:07.897 slat (nsec): min=7632, max=39286, avg=16458.69, stdev=5507.77 00:12:07.897 clat (usec): min=220, max=41495, avg=820.12, stdev=4389.51 00:12:07.897 lat (usec): min=228, max=41512, avg=836.58, stdev=4389.72 00:12:07.897 clat percentiles (usec): 00:12:07.897 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 310], 00:12:07.897 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:12:07.897 | 70.00th=[ 347], 80.00th=[ 375], 90.00th=[ 416], 95.00th=[ 433], 00:12:07.897 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:12:07.897 | 99.99th=[41681] 00:12:07.897 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:12:07.897 slat (nsec): min=6468, max=60191, avg=16940.25, stdev=7266.26 00:12:07.897 clat (usec): min=152, max=620, avg=200.20, stdev=32.88 00:12:07.897 lat (usec): min=162, max=632, avg=217.14, stdev=33.91 00:12:07.897 clat percentiles (usec): 00:12:07.897 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 184], 00:12:07.897 | 30.00th=[ 
188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:12:07.897 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 237], 00:12:07.897 | 99.00th=[ 330], 99.50th=[ 355], 99.90th=[ 474], 99.95th=[ 619], 00:12:07.897 | 99.99th=[ 619] 00:12:07.897 bw ( KiB/s): min= 4096, max= 4096, per=15.86%, avg=4096.00, stdev= 0.00, samples=2 00:12:07.897 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:12:07.897 lat (usec) : 250=50.36%, 500=48.98%, 750=0.10% 00:12:07.897 lat (msec) : 50=0.56% 00:12:07.897 cpu : usr=2.17%, sys=4.24%, ctx=1961, majf=0, minf=1 00:12:07.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.897 issued rwts: total=936,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.897 00:12:07.897 Run status group 0 (all jobs): 00:12:07.897 READ: bw=21.6MiB/s (22.6MB/s), 3689KiB/s-8184KiB/s (3777kB/s-8380kB/s), io=21.9MiB (23.0MB), run=1001-1015msec 00:12:07.897 WRITE: bw=25.2MiB/s (26.4MB/s), 4035KiB/s-8236KiB/s (4132kB/s-8433kB/s), io=25.6MiB (26.8MB), run=1001-1015msec 00:12:07.897 00:12:07.897 Disk stats (read/write): 00:12:07.897 nvme0n1: ios=1566/1971, merge=0/0, ticks=636/359, in_queue=995, util=98.00% 00:12:07.897 nvme0n2: ios=1439/1536, merge=0/0, ticks=442/314, in_queue=756, util=86.80% 00:12:07.897 nvme0n3: ios=896/1024, merge=0/0, ticks=1458/186, in_queue=1644, util=98.75% 00:12:07.897 nvme0n4: ios=885/1024, merge=0/0, ticks=757/197, in_queue=954, util=99.69% 00:12:07.897 08:48:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:07.897 [global] 00:12:07.897 thread=1 00:12:07.897 invalidate=1 00:12:07.897 rw=write 00:12:07.897 
time_based=1 00:12:07.897 runtime=1 00:12:07.897 ioengine=libaio 00:12:07.897 direct=1 00:12:07.897 bs=4096 00:12:07.897 iodepth=128 00:12:07.897 norandommap=0 00:12:07.897 numjobs=1 00:12:07.897 00:12:07.897 verify_dump=1 00:12:07.897 verify_backlog=512 00:12:07.897 verify_state_save=0 00:12:07.897 do_verify=1 00:12:07.897 verify=crc32c-intel 00:12:07.897 [job0] 00:12:07.897 filename=/dev/nvme0n1 00:12:07.897 [job1] 00:12:07.897 filename=/dev/nvme0n2 00:12:07.897 [job2] 00:12:07.897 filename=/dev/nvme0n3 00:12:07.897 [job3] 00:12:07.897 filename=/dev/nvme0n4 00:12:07.897 Could not set queue depth (nvme0n1) 00:12:07.897 Could not set queue depth (nvme0n2) 00:12:07.897 Could not set queue depth (nvme0n3) 00:12:07.897 Could not set queue depth (nvme0n4) 00:12:08.160 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.161 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.161 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.161 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:08.161 fio-3.35 00:12:08.161 Starting 4 threads 00:12:09.534 00:12:09.534 job0: (groupid=0, jobs=1): err= 0: pid=757402: Wed Nov 6 08:48:22 2024 00:12:09.534 read: IOPS=4767, BW=18.6MiB/s (19.5MB/s)(19.5MiB/1046msec) 00:12:09.534 slat (usec): min=2, max=9503, avg=98.48, stdev=545.22 00:12:09.534 clat (usec): min=5540, max=54271, avg=13878.63, stdev=6651.14 00:12:09.534 lat (usec): min=5546, max=57587, avg=13977.11, stdev=6665.80 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 8717], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:12:09.534 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:12:09.534 | 70.00th=[13435], 80.00th=[14615], 90.00th=[16712], 95.00th=[23200], 00:12:09.534 | 99.00th=[50070], 
99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:12:09.534 | 99.99th=[54264] 00:12:09.534 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:12:09.534 slat (usec): min=3, max=8276, avg=89.97, stdev=510.50 00:12:09.534 clat (usec): min=4641, max=28207, avg=12314.25, stdev=2738.57 00:12:09.534 lat (usec): min=5487, max=28232, avg=12404.22, stdev=2769.47 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 7046], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[10814], 00:12:09.534 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:12:09.534 | 70.00th=[12518], 80.00th=[14091], 90.00th=[15008], 95.00th=[16909], 00:12:09.534 | 99.00th=[23725], 99.50th=[25822], 99.90th=[28181], 99.95th=[28181], 00:12:09.534 | 99.99th=[28181] 00:12:09.534 bw ( KiB/s): min=20480, max=20480, per=31.32%, avg=20480.00, stdev= 0.00, samples=2 00:12:09.534 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:12:09.534 lat (msec) : 10=8.71%, 20=86.87%, 50=4.01%, 100=0.42% 00:12:09.534 cpu : usr=5.84%, sys=10.43%, ctx=408, majf=0, minf=1 00:12:09.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:09.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.534 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.534 job1: (groupid=0, jobs=1): err= 0: pid=757421: Wed Nov 6 08:48:22 2024 00:12:09.534 read: IOPS=4175, BW=16.3MiB/s (17.1MB/s)(17.0MiB/1045msec) 00:12:09.534 slat (usec): min=2, max=8463, avg=104.84, stdev=653.57 00:12:09.534 clat (usec): min=5101, max=54170, avg=14451.74, stdev=6958.15 00:12:09.534 lat (usec): min=5111, max=57794, avg=14556.58, stdev=6987.17 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 7373], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11338], 
00:12:09.534 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12780], 00:12:09.534 | 70.00th=[14615], 80.00th=[16319], 90.00th=[20055], 95.00th=[22152], 00:12:09.534 | 99.00th=[49546], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:12:09.534 | 99.99th=[54264] 00:12:09.534 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:12:09.534 slat (usec): min=3, max=41019, avg=108.01, stdev=876.66 00:12:09.534 clat (usec): min=1164, max=120563, avg=14957.28, stdev=15571.14 00:12:09.534 lat (usec): min=1171, max=120579, avg=15065.29, stdev=15650.83 00:12:09.534 clat percentiles (msec): 00:12:09.534 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:12:09.534 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:12:09.534 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 17], 95.00th=[ 23], 00:12:09.534 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 121], 00:12:09.534 | 99.99th=[ 121] 00:12:09.534 bw ( KiB/s): min=17536, max=19328, per=28.18%, avg=18432.00, stdev=1267.14, samples=2 00:12:09.534 iops : min= 4384, max= 4832, avg=4608.00, stdev=316.78, samples=2 00:12:09.534 lat (msec) : 2=0.03%, 4=0.22%, 10=10.51%, 20=81.22%, 50=5.86% 00:12:09.534 lat (msec) : 100=1.36%, 250=0.79% 00:12:09.534 cpu : usr=5.75%, sys=7.66%, ctx=344, majf=0, minf=1 00:12:09.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:09.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.534 issued rwts: total=4363,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.534 job2: (groupid=0, jobs=1): err= 0: pid=757437: Wed Nov 6 08:48:22 2024 00:12:09.534 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:12:09.534 slat (usec): min=3, max=26557, avg=153.68, stdev=1082.58 00:12:09.534 clat (msec): min=7, max=111, 
avg=19.76, stdev=17.33 00:12:09.534 lat (msec): min=7, max=111, avg=19.92, stdev=17.43 00:12:09.534 clat percentiles (msec): 00:12:09.534 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:12:09.534 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:12:09.534 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 28], 95.00th=[ 64], 00:12:09.534 | 99.00th=[ 97], 99.50th=[ 112], 99.90th=[ 112], 99.95th=[ 112], 00:12:09.534 | 99.99th=[ 112] 00:12:09.534 write: IOPS=3690, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec); 0 zone resets 00:12:09.534 slat (usec): min=4, max=26254, avg=110.40, stdev=733.24 00:12:09.534 clat (usec): min=6120, max=52139, avg=14802.21, stdev=4810.43 00:12:09.534 lat (usec): min=6129, max=52150, avg=14912.61, stdev=4836.65 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[11863], 20.00th=[13042], 00:12:09.534 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14484], 00:12:09.534 | 70.00th=[14746], 80.00th=[15533], 90.00th=[17433], 95.00th=[19006], 00:12:09.534 | 99.00th=[34866], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:12:09.534 | 99.99th=[52167] 00:12:09.534 bw ( KiB/s): min=12288, max=16488, per=22.00%, avg=14388.00, stdev=2969.85, samples=2 00:12:09.534 iops : min= 3072, max= 4122, avg=3597.00, stdev=742.46, samples=2 00:12:09.534 lat (msec) : 10=3.22%, 20=89.03%, 50=3.19%, 100=4.14%, 250=0.42% 00:12:09.534 cpu : usr=5.27%, sys=7.65%, ctx=338, majf=0, minf=1 00:12:09.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:09.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.534 issued rwts: total=3584,3716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.534 job3: (groupid=0, jobs=1): err= 0: pid=757438: Wed Nov 6 08:48:22 2024 00:12:09.534 read: 
IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:12:09.534 slat (usec): min=2, max=15109, avg=124.29, stdev=909.18 00:12:09.534 clat (usec): min=4715, max=59991, avg=17592.73, stdev=8246.10 00:12:09.534 lat (usec): min=4729, max=60004, avg=17717.02, stdev=8299.12 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 5276], 5.00th=[11469], 10.00th=[13435], 20.00th=[13960], 00:12:09.534 | 30.00th=[14222], 40.00th=[15008], 50.00th=[15533], 60.00th=[16057], 00:12:09.534 | 70.00th=[17433], 80.00th=[18220], 90.00th=[22414], 95.00th=[30802], 00:12:09.534 | 99.00th=[54264], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:12:09.534 | 99.99th=[60031] 00:12:09.534 write: IOPS=3639, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1005msec); 0 zone resets 00:12:09.534 slat (usec): min=3, max=12757, avg=120.24, stdev=875.69 00:12:09.534 clat (usec): min=439, max=95452, avg=17630.17, stdev=13795.35 00:12:09.534 lat (usec): min=2880, max=95462, avg=17750.41, stdev=13854.82 00:12:09.534 clat percentiles (usec): 00:12:09.534 | 1.00th=[ 4080], 5.00th=[ 5669], 10.00th=[ 8029], 20.00th=[10683], 00:12:09.534 | 30.00th=[12256], 40.00th=[13698], 50.00th=[14222], 60.00th=[15008], 00:12:09.534 | 70.00th=[16057], 80.00th=[21103], 90.00th=[28181], 95.00th=[42206], 00:12:09.534 | 99.00th=[89654], 99.50th=[92799], 99.90th=[94897], 99.95th=[95945], 00:12:09.534 | 99.99th=[95945] 00:12:09.534 bw ( KiB/s): min=12288, max=16384, per=21.92%, avg=14336.00, stdev=2896.31, samples=2 00:12:09.534 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:12:09.534 lat (usec) : 500=0.01% 00:12:09.534 lat (msec) : 4=0.40%, 10=10.23%, 20=70.84%, 50=14.98%, 100=3.53% 00:12:09.534 cpu : usr=3.69%, sys=5.88%, ctx=238, majf=0, minf=1 00:12:09.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:12:09.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:12:09.534 issued rwts: total=3584,3658,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.534 00:12:09.534 Run status group 0 (all jobs): 00:12:09.534 READ: bw=61.7MiB/s (64.7MB/s), 13.9MiB/s-18.6MiB/s (14.6MB/s-19.5MB/s), io=64.5MiB (67.7MB), run=1005-1046msec 00:12:09.534 WRITE: bw=63.9MiB/s (67.0MB/s), 14.2MiB/s-19.1MiB/s (14.9MB/s-20.0MB/s), io=66.8MiB (70.0MB), run=1005-1046msec 00:12:09.534 00:12:09.534 Disk stats (read/write): 00:12:09.534 nvme0n1: ios=4149/4519, merge=0/0, ticks=21808/20623, in_queue=42431, util=97.70% 00:12:09.534 nvme0n2: ios=3634/3791, merge=0/0, ticks=27267/27246, in_queue=54513, util=98.27% 00:12:09.534 nvme0n3: ios=2925/3072, merge=0/0, ticks=25285/21230, in_queue=46515, util=98.33% 00:12:09.534 nvme0n4: ios=2918/3072, merge=0/0, ticks=36358/43522, in_queue=79880, util=89.71% 00:12:09.534 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:09.534 [global] 00:12:09.534 thread=1 00:12:09.534 invalidate=1 00:12:09.534 rw=randwrite 00:12:09.534 time_based=1 00:12:09.534 runtime=1 00:12:09.534 ioengine=libaio 00:12:09.534 direct=1 00:12:09.534 bs=4096 00:12:09.534 iodepth=128 00:12:09.534 norandommap=0 00:12:09.534 numjobs=1 00:12:09.534 00:12:09.534 verify_dump=1 00:12:09.534 verify_backlog=512 00:12:09.535 verify_state_save=0 00:12:09.535 do_verify=1 00:12:09.535 verify=crc32c-intel 00:12:09.535 [job0] 00:12:09.535 filename=/dev/nvme0n1 00:12:09.535 [job1] 00:12:09.535 filename=/dev/nvme0n2 00:12:09.535 [job2] 00:12:09.535 filename=/dev/nvme0n3 00:12:09.535 [job3] 00:12:09.535 filename=/dev/nvme0n4 00:12:09.535 Could not set queue depth (nvme0n1) 00:12:09.535 Could not set queue depth (nvme0n2) 00:12:09.535 Could not set queue depth (nvme0n3) 00:12:09.535 Could not set queue depth (nvme0n4) 00:12:09.535 job0: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.535 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.535 fio-3.35 00:12:09.535 Starting 4 threads 00:12:10.907 00:12:10.907 job0: (groupid=0, jobs=1): err= 0: pid=757664: Wed Nov 6 08:48:23 2024 00:12:10.907 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:12:10.907 slat (usec): min=2, max=14740, avg=169.01, stdev=950.43 00:12:10.907 clat (usec): min=7467, max=84962, avg=20551.18, stdev=14663.92 00:12:10.907 lat (usec): min=7862, max=84966, avg=20720.19, stdev=14767.24 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[11600], 20.00th=[12256], 00:12:10.907 | 30.00th=[12649], 40.00th=[14222], 50.00th=[15664], 60.00th=[17695], 00:12:10.907 | 70.00th=[20055], 80.00th=[23987], 90.00th=[33817], 95.00th=[46924], 00:12:10.907 | 99.00th=[83362], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:12:10.907 | 99.99th=[85459] 00:12:10.907 write: IOPS=2722, BW=10.6MiB/s (11.1MB/s)(10.7MiB/1005msec); 0 zone resets 00:12:10.907 slat (usec): min=3, max=16633, avg=200.94, stdev=939.81 00:12:10.907 clat (usec): min=4173, max=65278, avg=27135.38, stdev=12714.55 00:12:10.907 lat (usec): min=5887, max=65287, avg=27336.32, stdev=12777.70 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12649], 00:12:10.907 | 30.00th=[19006], 40.00th=[21627], 50.00th=[25297], 60.00th=[32375], 00:12:10.907 | 70.00th=[34866], 80.00th=[39584], 90.00th=[44303], 95.00th=[47973], 00:12:10.907 | 99.00th=[53740], 99.50th=[53740], 99.90th=[65274], 99.95th=[65274], 00:12:10.907 | 
99.99th=[65274] 00:12:10.907 bw ( KiB/s): min= 9872, max=11000, per=18.98%, avg=10436.00, stdev=797.62, samples=2 00:12:10.907 iops : min= 2468, max= 2750, avg=2609.00, stdev=199.40, samples=2 00:12:10.907 lat (msec) : 10=4.74%, 20=44.20%, 50=47.53%, 100=3.53% 00:12:10.907 cpu : usr=1.99%, sys=3.39%, ctx=315, majf=0, minf=1 00:12:10.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:10.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.907 issued rwts: total=2560,2736,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.907 job1: (groupid=0, jobs=1): err= 0: pid=757665: Wed Nov 6 08:48:23 2024 00:12:10.907 read: IOPS=4275, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1005msec) 00:12:10.907 slat (usec): min=2, max=29603, avg=105.24, stdev=783.30 00:12:10.907 clat (usec): min=2185, max=51539, avg=13440.24, stdev=6203.27 00:12:10.907 lat (usec): min=4786, max=51550, avg=13545.48, stdev=6249.32 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 5276], 5.00th=[ 8094], 10.00th=[ 9241], 20.00th=[ 9896], 00:12:10.907 | 30.00th=[10290], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:12:10.907 | 70.00th=[13566], 80.00th=[15270], 90.00th=[19006], 95.00th=[26346], 00:12:10.907 | 99.00th=[39584], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:12:10.907 | 99.99th=[51643] 00:12:10.907 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:12:10.907 slat (usec): min=3, max=11698, avg=112.29, stdev=655.29 00:12:10.907 clat (usec): min=2879, max=54581, avg=15000.11, stdev=9497.46 00:12:10.907 lat (usec): min=2891, max=54613, avg=15112.40, stdev=9570.19 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 5014], 5.00th=[ 6980], 10.00th=[ 8717], 20.00th=[ 9372], 00:12:10.907 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 
60.00th=[12125], 00:12:10.907 | 70.00th=[13960], 80.00th=[18482], 90.00th=[23987], 95.00th=[39584], 00:12:10.907 | 99.00th=[52167], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:12:10.907 | 99.99th=[54789] 00:12:10.907 bw ( KiB/s): min=17208, max=19656, per=33.52%, avg=18432.00, stdev=1731.00, samples=2 00:12:10.907 iops : min= 4302, max= 4914, avg=4608.00, stdev=432.75, samples=2 00:12:10.907 lat (msec) : 4=0.19%, 10=23.90%, 20=62.80%, 50=12.43%, 100=0.69% 00:12:10.907 cpu : usr=3.69%, sys=6.27%, ctx=379, majf=0, minf=1 00:12:10.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:10.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.907 issued rwts: total=4297,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.907 job2: (groupid=0, jobs=1): err= 0: pid=757666: Wed Nov 6 08:48:23 2024 00:12:10.907 read: IOPS=1952, BW=7809KiB/s (7996kB/s)(7848KiB/1005msec) 00:12:10.907 slat (usec): min=2, max=25654, avg=284.23, stdev=1443.45 00:12:10.907 clat (usec): min=1933, max=70837, avg=36779.01, stdev=15524.97 00:12:10.907 lat (usec): min=8675, max=70851, avg=37063.24, stdev=15546.95 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[11076], 5.00th=[14877], 10.00th=[20579], 20.00th=[22152], 00:12:10.907 | 30.00th=[25035], 40.00th=[29230], 50.00th=[34866], 60.00th=[39584], 00:12:10.907 | 70.00th=[44827], 80.00th=[51119], 90.00th=[62129], 95.00th=[64226], 00:12:10.907 | 99.00th=[67634], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:12:10.907 | 99.99th=[70779] 00:12:10.907 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:12:10.907 slat (usec): min=4, max=16071, avg=206.08, stdev=981.89 00:12:10.907 clat (usec): min=9001, max=84198, avg=26561.73, stdev=16497.36 00:12:10.907 lat (usec): min=9008, max=84221, 
avg=26767.82, stdev=16587.09 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[10552], 5.00th=[11207], 10.00th=[13435], 20.00th=[15533], 00:12:10.907 | 30.00th=[17433], 40.00th=[17957], 50.00th=[20055], 60.00th=[21365], 00:12:10.907 | 70.00th=[29492], 80.00th=[37487], 90.00th=[46924], 95.00th=[68682], 00:12:10.907 | 99.00th=[82314], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:12:10.907 | 99.99th=[84411] 00:12:10.907 bw ( KiB/s): min= 7640, max= 8744, per=14.90%, avg=8192.00, stdev=780.65, samples=2 00:12:10.907 iops : min= 1910, max= 2186, avg=2048.00, stdev=195.16, samples=2 00:12:10.907 lat (msec) : 2=0.02%, 10=0.57%, 20=30.77%, 50=54.01%, 100=14.61% 00:12:10.907 cpu : usr=2.99%, sys=4.58%, ctx=220, majf=0, minf=1 00:12:10.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:12:10.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.907 issued rwts: total=1962,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.907 job3: (groupid=0, jobs=1): err= 0: pid=757667: Wed Nov 6 08:48:23 2024 00:12:10.907 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:12:10.907 slat (usec): min=2, max=19373, avg=116.73, stdev=709.10 00:12:10.907 clat (usec): min=2918, max=59363, avg=14053.14, stdev=6551.95 00:12:10.907 lat (usec): min=5133, max=59390, avg=14169.87, stdev=6593.59 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10814], 00:12:10.907 | 30.00th=[11731], 40.00th=[12649], 50.00th=[13304], 60.00th=[13566], 00:12:10.907 | 70.00th=[13960], 80.00th=[14353], 90.00th=[16712], 95.00th=[26870], 00:12:10.907 | 99.00th=[51643], 99.50th=[56361], 99.90th=[59507], 99.95th=[59507], 00:12:10.907 | 99.99th=[59507] 00:12:10.907 write: IOPS=4413, BW=17.2MiB/s 
(18.1MB/s)(17.3MiB/1002msec); 0 zone resets 00:12:10.907 slat (usec): min=3, max=9628, avg=105.02, stdev=557.68 00:12:10.907 clat (usec): min=291, max=59333, avg=15648.70, stdev=9953.43 00:12:10.907 lat (usec): min=478, max=59340, avg=15753.72, stdev=9998.95 00:12:10.907 clat percentiles (usec): 00:12:10.907 | 1.00th=[ 1876], 5.00th=[ 4883], 10.00th=[ 7046], 20.00th=[ 9503], 00:12:10.907 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12387], 60.00th=[13435], 00:12:10.907 | 70.00th=[16581], 80.00th=[20579], 90.00th=[29492], 95.00th=[39060], 00:12:10.907 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:12:10.907 | 99.99th=[59507] 00:12:10.907 bw ( KiB/s): min=13880, max=20480, per=31.25%, avg=17180.00, stdev=4666.90, samples=2 00:12:10.907 iops : min= 3470, max= 5120, avg=4295.00, stdev=1166.73, samples=2 00:12:10.907 lat (usec) : 500=0.04%, 750=0.11%, 1000=0.13% 00:12:10.907 lat (msec) : 2=0.40%, 4=1.30%, 10=17.16%, 20=65.37%, 50=14.87% 00:12:10.907 lat (msec) : 100=0.62% 00:12:10.908 cpu : usr=4.80%, sys=8.29%, ctx=437, majf=0, minf=1 00:12:10.908 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:10.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.908 issued rwts: total=4096,4422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.908 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.908 00:12:10.908 Run status group 0 (all jobs): 00:12:10.908 READ: bw=50.2MiB/s (52.6MB/s), 7809KiB/s-16.7MiB/s (7996kB/s-17.5MB/s), io=50.4MiB (52.9MB), run=1002-1005msec 00:12:10.908 WRITE: bw=53.7MiB/s (56.3MB/s), 8151KiB/s-17.9MiB/s (8347kB/s-18.8MB/s), io=54.0MiB (56.6MB), run=1002-1005msec 00:12:10.908 00:12:10.908 Disk stats (read/write): 00:12:10.908 nvme0n1: ios=2222/2560, merge=0/0, ticks=16844/30436, in_queue=47280, util=98.40% 00:12:10.908 nvme0n2: ios=3634/3653, merge=0/0, ticks=20842/27020, 
in_queue=47862, util=97.77% 00:12:10.908 nvme0n3: ios=1594/1919, merge=0/0, ticks=15271/11730, in_queue=27001, util=98.65% 00:12:10.908 nvme0n4: ios=3507/3584, merge=0/0, ticks=30622/37380, in_queue=68002, util=95.50% 00:12:10.908 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:10.908 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=757803 00:12:10.908 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:10.908 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:10.908 [global] 00:12:10.908 thread=1 00:12:10.908 invalidate=1 00:12:10.908 rw=read 00:12:10.908 time_based=1 00:12:10.908 runtime=10 00:12:10.908 ioengine=libaio 00:12:10.908 direct=1 00:12:10.908 bs=4096 00:12:10.908 iodepth=1 00:12:10.908 norandommap=1 00:12:10.908 numjobs=1 00:12:10.908 00:12:10.908 [job0] 00:12:10.908 filename=/dev/nvme0n1 00:12:10.908 [job1] 00:12:10.908 filename=/dev/nvme0n2 00:12:10.908 [job2] 00:12:10.908 filename=/dev/nvme0n3 00:12:10.908 [job3] 00:12:10.908 filename=/dev/nvme0n4 00:12:10.908 Could not set queue depth (nvme0n1) 00:12:10.908 Could not set queue depth (nvme0n2) 00:12:10.908 Could not set queue depth (nvme0n3) 00:12:10.908 Could not set queue depth (nvme0n4) 00:12:10.908 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.908 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.908 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.908 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.908 fio-3.35 00:12:10.908 Starting 4 threads 00:12:14.187 08:48:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:14.187 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:14.187 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=23519232, buflen=4096 00:12:14.187 fio: pid=757902, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.445 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.445 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:14.445 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=48336896, buflen=4096 00:12:14.445 fio: pid=757901, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.703 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.703 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:14.703 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44408832, buflen=4096 00:12:14.703 fio: pid=757899, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.961 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=20025344, buflen=4096 00:12:14.961 fio: pid=757900, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:12:14.961 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:12:14.961 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:14.961 00:12:14.961 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=757899: Wed Nov 6 08:48:28 2024 00:12:14.961 read: IOPS=3052, BW=11.9MiB/s (12.5MB/s)(42.4MiB/3552msec) 00:12:14.961 slat (usec): min=5, max=32287, avg=16.14, stdev=344.62 00:12:14.961 clat (usec): min=167, max=41344, avg=306.15, stdev=1355.33 00:12:14.961 lat (usec): min=173, max=41350, avg=322.29, stdev=1398.38 00:12:14.961 clat percentiles (usec): 00:12:14.961 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:12:14.961 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:12:14.961 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 367], 00:12:14.961 | 99.00th=[ 502], 99.50th=[ 578], 99.90th=[40633], 99.95th=[41157], 00:12:14.961 | 99.99th=[41157] 00:12:14.961 bw ( KiB/s): min= 8256, max=17072, per=39.54%, avg=13742.67, stdev=3682.26, samples=6 00:12:14.961 iops : min= 2064, max= 4268, avg=3435.67, stdev=920.56, samples=6 00:12:14.961 lat (usec) : 250=62.96%, 500=35.99%, 750=0.89%, 1000=0.01% 00:12:14.961 lat (msec) : 2=0.03%, 50=0.11% 00:12:14.961 cpu : usr=2.37%, sys=5.66%, ctx=10847, majf=0, minf=2 00:12:14.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 issued rwts: total=10843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.961 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=757900: Wed Nov 6 08:48:28 2024 00:12:14.961 read: IOPS=1276, BW=5106KiB/s 
(5229kB/s)(19.1MiB/3830msec) 00:12:14.961 slat (usec): min=5, max=25936, avg=23.47, stdev=421.45 00:12:14.961 clat (usec): min=179, max=42053, avg=756.63, stdev=4500.18 00:12:14.961 lat (usec): min=185, max=67968, avg=778.73, stdev=4598.06 00:12:14.961 clat percentiles (usec): 00:12:14.961 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:12:14.961 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 258], 00:12:14.961 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 334], 00:12:14.961 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:12:14.961 | 99.99th=[42206] 00:12:14.961 bw ( KiB/s): min= 86, max=14912, per=16.05%, avg=5578.00, stdev=7086.97, samples=7 00:12:14.961 iops : min= 21, max= 3728, avg=1394.43, stdev=1771.81, samples=7 00:12:14.961 lat (usec) : 250=49.63%, 500=49.04%, 750=0.06%, 1000=0.04% 00:12:14.961 lat (msec) : 50=1.21% 00:12:14.961 cpu : usr=0.86%, sys=2.04%, ctx=4894, majf=0, minf=1 00:12:14.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.961 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=757901: Wed Nov 6 08:48:28 2024 00:12:14.961 read: IOPS=3649, BW=14.3MiB/s (14.9MB/s)(46.1MiB/3234msec) 00:12:14.961 slat (usec): min=4, max=15785, avg=12.56, stdev=145.55 00:12:14.961 clat (usec): min=178, max=41355, avg=256.80, stdev=835.21 00:12:14.961 lat (usec): min=185, max=41361, avg=269.36, stdev=847.84 00:12:14.961 clat percentiles (usec): 00:12:14.961 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:12:14.961 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 
239], 00:12:14.961 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 281], 95.00th=[ 322], 00:12:14.961 | 99.00th=[ 441], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 3326], 00:12:14.961 | 99.99th=[41157] 00:12:14.961 bw ( KiB/s): min=11688, max=17400, per=43.20%, avg=15012.00, stdev=1921.36, samples=6 00:12:14.961 iops : min= 2922, max= 4350, avg=3753.00, stdev=480.34, samples=6 00:12:14.961 lat (usec) : 250=72.80%, 500=26.92%, 750=0.21% 00:12:14.961 lat (msec) : 2=0.01%, 4=0.01%, 50=0.04% 00:12:14.961 cpu : usr=2.38%, sys=4.58%, ctx=11805, majf=0, minf=2 00:12:14.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 issued rwts: total=11802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.961 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=757902: Wed Nov 6 08:48:28 2024 00:12:14.961 read: IOPS=1949, BW=7796KiB/s (7983kB/s)(22.4MiB/2946msec) 00:12:14.961 slat (nsec): min=6643, max=52050, avg=12409.62, stdev=5961.06 00:12:14.961 clat (usec): min=221, max=41561, avg=492.43, stdev=2680.35 00:12:14.961 lat (usec): min=229, max=41577, avg=504.84, stdev=2680.86 00:12:14.961 clat percentiles (usec): 00:12:14.961 | 1.00th=[ 239], 5.00th=[ 258], 10.00th=[ 269], 20.00th=[ 285], 00:12:14.961 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:12:14.961 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 371], 00:12:14.961 | 99.00th=[ 433], 99.50th=[ 553], 99.90th=[41157], 99.95th=[41681], 00:12:14.961 | 99.99th=[41681] 00:12:14.961 bw ( KiB/s): min= 216, max=12152, per=20.52%, avg=7132.80, stdev=6313.74, samples=5 00:12:14.961 iops : min= 54, max= 3038, avg=1783.20, stdev=1578.44, samples=5 00:12:14.961 lat (usec) : 250=2.77%, 
500=96.64%, 750=0.12% 00:12:14.961 lat (msec) : 2=0.02%, 50=0.44% 00:12:14.961 cpu : usr=1.66%, sys=3.70%, ctx=5743, majf=0, minf=2 00:12:14.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.962 issued rwts: total=5743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.962 00:12:14.962 Run status group 0 (all jobs): 00:12:14.962 READ: bw=33.9MiB/s (35.6MB/s), 5106KiB/s-14.3MiB/s (5229kB/s-14.9MB/s), io=130MiB (136MB), run=2946-3830msec 00:12:14.962 00:12:14.962 Disk stats (read/write): 00:12:14.962 nvme0n1: ios=10837/0, merge=0/0, ticks=3019/0, in_queue=3019, util=94.71% 00:12:14.962 nvme0n2: ios=4929/0, merge=0/0, ticks=4568/0, in_queue=4568, util=98.90% 00:12:14.962 nvme0n3: ios=11434/0, merge=0/0, ticks=2818/0, in_queue=2818, util=96.33% 00:12:14.962 nvme0n4: ios=5542/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.72% 00:12:15.220 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.220 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:15.478 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.478 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:15.736 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.736 08:48:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:15.994 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.994 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:16.252 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 757803 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 
']' 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:16.510 nvmf hotplug test: fio failed as expected 00:12:16.510 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.769 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.769 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.769 rmmod nvme_tcp 00:12:16.769 rmmod nvme_fabrics 00:12:16.769 rmmod nvme_keyring 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@129 -- # return 0 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 755773 ']' 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 755773 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 755773 ']' 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 755773 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 755773 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 755773' 00:12:17.027 killing process with pid 755773 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 755773 00:12:17.027 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 755773 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:17.286 08:48:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.286 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.192 00:12:19.192 real 0m24.151s 00:12:19.192 user 1m23.978s 00:12:19.192 sys 0m7.871s 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.192 ************************************ 00:12:19.192 END TEST nvmf_fio_target 00:12:19.192 ************************************ 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 
-- # set +x 00:12:19.192 ************************************ 00:12:19.192 START TEST nvmf_bdevio 00:12:19.192 ************************************ 00:12:19.192 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:19.453 * Looking for test storage... 00:12:19.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.453 08:48:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.453 --rc genhtml_branch_coverage=1 00:12:19.453 --rc genhtml_function_coverage=1 00:12:19.453 --rc genhtml_legend=1 00:12:19.453 --rc geninfo_all_blocks=1 00:12:19.453 --rc geninfo_unexecuted_blocks=1 00:12:19.453 00:12:19.453 ' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.453 --rc genhtml_branch_coverage=1 00:12:19.453 --rc genhtml_function_coverage=1 00:12:19.453 --rc genhtml_legend=1 00:12:19.453 --rc geninfo_all_blocks=1 00:12:19.453 --rc geninfo_unexecuted_blocks=1 00:12:19.453 00:12:19.453 ' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.453 --rc genhtml_branch_coverage=1 00:12:19.453 --rc genhtml_function_coverage=1 00:12:19.453 --rc genhtml_legend=1 00:12:19.453 --rc geninfo_all_blocks=1 00:12:19.453 --rc geninfo_unexecuted_blocks=1 00:12:19.453 00:12:19.453 ' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:19.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.453 --rc genhtml_branch_coverage=1 00:12:19.453 --rc genhtml_function_coverage=1 00:12:19.453 --rc genhtml_legend=1 00:12:19.453 --rc geninfo_all_blocks=1 00:12:19.453 --rc geninfo_unexecuted_blocks=1 00:12:19.453 00:12:19.453 ' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.453 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.454 08:48:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.990 08:48:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.990 08:48:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:21.990 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:21.990 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.990 
08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:21.990 Found net devices under 0000:09:00.0: cvl_0_0 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:21.990 Found net devices under 0000:09:00.1: cvl_0_1 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:21.990 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:12:21.991 00:12:21.991 --- 10.0.0.2 ping statistics --- 00:12:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.991 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:21.991 00:12:21.991 --- 10.0.0.1 ping statistics --- 00:12:21.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.991 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:21.991 08:48:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=760655 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 760655 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 760655 ']' 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.991 08:48:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.991 [2024-11-06 08:48:34.969275] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:12:21.991 [2024-11-06 08:48:34.969345] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.991 [2024-11-06 08:48:35.039688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.991 [2024-11-06 08:48:35.096924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.991 [2024-11-06 08:48:35.096974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.991 [2024-11-06 08:48:35.096997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.991 [2024-11-06 08:48:35.097007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.991 [2024-11-06 08:48:35.097017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
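(Editor's note, not part of the captured log.) The `[: : integer expression expected` message captured earlier in this run comes from `nvmf/common.sh` line 33 numerically testing an empty string: `'[' '' -eq 1 ']'`. A minimal sketch of the pitfall and its usual guard, using a hypothetical flag variable, is:

```shell
# HUGE_FLAG stands in for an optional flag variable that may be empty or
# unset; it is a hypothetical name, not one from the SPDK scripts.
HUGE_FLAG=""

# Broken form: [ "$HUGE_FLAG" -eq 1 ] with an empty value triggers
# "[: : integer expression expected". Defaulting the empty value to 0
# with ${VAR:-0} keeps the numeric comparison well-formed.
if [ "${HUGE_FLAG:-0}" -eq 1 ]; then
	echo "flag set"
else
	echo "flag not set"
fi
```

The `${HUGE_FLAG:-0}` expansion substitutes `0` when the variable is unset or null, so `test`'s `-eq` always sees an integer operand.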
00:12:21.991 [2024-11-06 08:48:35.098639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:21.991 [2024-11-06 08:48:35.098700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:21.991 [2024-11-06 08:48:35.098768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:21.991 [2024-11-06 08:48:35.098771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.991 [2024-11-06 08:48:35.247015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.991 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.991 08:48:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.249 Malloc0 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.249 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:22.250 [2024-11-06 08:48:35.316473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:22.250 { 00:12:22.250 "params": { 00:12:22.250 "name": "Nvme$subsystem", 00:12:22.250 "trtype": "$TEST_TRANSPORT", 00:12:22.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.250 "adrfam": "ipv4", 00:12:22.250 "trsvcid": "$NVMF_PORT", 00:12:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.250 "hdgst": ${hdgst:-false}, 00:12:22.250 "ddgst": ${ddgst:-false} 00:12:22.250 }, 00:12:22.250 "method": "bdev_nvme_attach_controller" 00:12:22.250 } 00:12:22.250 EOF 00:12:22.250 )") 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
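(Editor's note, not part of the captured log.) The `--json /dev/fd/62` argument feeds bdevio a configuration that `gen_nvmf_target_json` assembles from the heredoc template shown above. A standalone re-creation of that per-subsystem substitution, with the values hardcoded to match this run, looks like:

```shell
# Re-create the attach-controller JSON fragment that gen_nvmf_target_json
# emits in the log above; values are hardcoded to match this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# The heredoc expands the shell variables in place, mirroring the
# config+=("$(cat <<-EOF ... EOF)") step in nvmf/common.sh.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

With `subsystem=1` this yields the `Nvme1`/`cnode1` block that `jq .` and `printf '%s\n'` print in the log.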
00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:12:22.250 08:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:22.250 "params": { 00:12:22.250 "name": "Nvme1", 00:12:22.250 "trtype": "tcp", 00:12:22.250 "traddr": "10.0.0.2", 00:12:22.250 "adrfam": "ipv4", 00:12:22.250 "trsvcid": "4420", 00:12:22.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.250 "hdgst": false, 00:12:22.250 "ddgst": false 00:12:22.250 }, 00:12:22.250 "method": "bdev_nvme_attach_controller" 00:12:22.250 }' 00:12:22.250 [2024-11-06 08:48:35.367456] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:12:22.250 [2024-11-06 08:48:35.367535] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760686 ] 00:12:22.250 [2024-11-06 08:48:35.436910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.250 [2024-11-06 08:48:35.502542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.250 [2024-11-06 08:48:35.502597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.250 [2024-11-06 08:48:35.502602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.508 I/O targets: 00:12:22.508 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:22.508 00:12:22.508 00:12:22.508 CUnit - A unit testing framework for C - Version 2.1-3 00:12:22.508 http://cunit.sourceforge.net/ 00:12:22.508 00:12:22.508 00:12:22.508 Suite: bdevio tests on: Nvme1n1 00:12:22.767 Test: blockdev write read block ...passed 00:12:22.767 Test: blockdev write zeroes read block ...passed 00:12:22.767 Test: blockdev write zeroes read no split ...passed 00:12:22.767 Test: blockdev write zeroes read split 
...passed 00:12:22.767 Test: blockdev write zeroes read split partial ...passed 00:12:22.767 Test: blockdev reset ...[2024-11-06 08:48:35.932756] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:22.767 [2024-11-06 08:48:35.932870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c0640 (9): Bad file descriptor 00:12:22.767 [2024-11-06 08:48:35.948424] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:22.767 passed 00:12:22.767 Test: blockdev write read 8 blocks ...passed 00:12:22.767 Test: blockdev write read size > 128k ...passed 00:12:22.767 Test: blockdev write read invalid size ...passed 00:12:22.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.767 Test: blockdev write read max offset ...passed 00:12:23.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:23.025 Test: blockdev writev readv 8 blocks ...passed 00:12:23.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:23.025 Test: blockdev writev readv block ...passed 00:12:23.025 Test: blockdev writev readv size > 128k ...passed 00:12:23.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:23.025 Test: blockdev comparev and writev ...[2024-11-06 08:48:36.161034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.161070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.161095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 
08:48:36.161112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.161443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.161468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.161490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.161506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.161822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.161856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.161879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.161895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.162228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.162253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:23.025 [2024-11-06 08:48:36.162275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:23.025 [2024-11-06 08:48:36.162291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:23.025 passed 00:12:23.025 Test: blockdev nvme passthru rw ...passed 00:12:23.025 Test: blockdev nvme passthru vendor specific ...[2024-11-06 08:48:36.244090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.026 [2024-11-06 08:48:36.244118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:23.026 [2024-11-06 08:48:36.244251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.026 [2024-11-06 08:48:36.244274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:23.026 [2024-11-06 08:48:36.244415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.026 [2024-11-06 08:48:36.244438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:23.026 [2024-11-06 08:48:36.244570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:23.026 [2024-11-06 08:48:36.244593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:23.026 passed 00:12:23.026 Test: blockdev nvme admin passthru ...passed 00:12:23.026 Test: blockdev copy ...passed 00:12:23.026 00:12:23.026 Run Summary: Type Total Ran Passed Failed Inactive 00:12:23.026 suites 1 1 n/a 0 0 00:12:23.026 tests 23 23 23 0 0 00:12:23.026 asserts 152 152 152 0 n/a 00:12:23.026 00:12:23.026 Elapsed time = 1.052 seconds 
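(Editor's note, not part of the captured log.) The `nvmf_tcp_init` plumbing earlier in this run reduces to: move one port of the NIC into a namespace, address both sides on 10.0.0.0/24, bring the links up, open the NVMe/TCP port, and ping across. The dry-run sketch below echoes each command instead of executing it, so it is safe without root; the `cvl_0_*` names are simply the ones this host enumerated.

```shell
# Dry-run of the namespace setup nvmf_tcp_init performed above.
# run() echoes each command instead of executing it, so no root needed.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk   # target-side namespace
TGT_IF=cvl_0_0       # interface handed to the namespace (target, 10.0.0.2)
INI_IF=cvl_0_1       # interface left in the default namespace (initiator, 10.0.0.1)

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
```

Dropping the `run` prefix (and running as root on a host with these interfaces) reproduces the setup; the target app is then launched under `ip netns exec "$NS"`, which is why the log's `nvmf_tgt` invocation is wrapped in `ip netns exec cvl_0_0_ns_spdk`.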
00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.284 rmmod nvme_tcp 00:12:23.284 rmmod nvme_fabrics 00:12:23.284 rmmod nvme_keyring 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 760655 ']' 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 760655 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- 
# '[' -z 760655 ']' 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 760655 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.284 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 760655 00:12:23.542 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:23.542 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:23.542 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 760655' 00:12:23.542 killing process with pid 760655 00:12:23.542 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 760655 00:12:23.542 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 760655 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.801 08:48:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:25.707 00:12:25.707 real 0m6.473s 00:12:25.707 user 0m10.090s 00:12:25.707 sys 0m2.174s 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:25.707 ************************************ 00:12:25.707 END TEST nvmf_bdevio 00:12:25.707 ************************************ 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:25.707 00:12:25.707 real 3m55.823s 00:12:25.707 user 10m13.345s 00:12:25.707 sys 1m8.591s 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:25.707 ************************************ 00:12:25.707 END TEST nvmf_target_core 00:12:25.707 ************************************ 00:12:25.707 08:48:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:25.707 08:48:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.707 08:48:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.707 08:48:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:12:25.707 ************************************ 00:12:25.707 START TEST nvmf_target_extra 00:12:25.707 ************************************ 00:12:25.707 08:48:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:25.966 * Looking for test storage... 00:12:25.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lcov --version 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.966 --rc genhtml_branch_coverage=1 00:12:25.966 --rc genhtml_function_coverage=1 00:12:25.966 --rc genhtml_legend=1 00:12:25.966 --rc geninfo_all_blocks=1 
00:12:25.966 --rc geninfo_unexecuted_blocks=1 00:12:25.966 00:12:25.966 ' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.966 --rc genhtml_branch_coverage=1 00:12:25.966 --rc genhtml_function_coverage=1 00:12:25.966 --rc genhtml_legend=1 00:12:25.966 --rc geninfo_all_blocks=1 00:12:25.966 --rc geninfo_unexecuted_blocks=1 00:12:25.966 00:12:25.966 ' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.966 --rc genhtml_branch_coverage=1 00:12:25.966 --rc genhtml_function_coverage=1 00:12:25.966 --rc genhtml_legend=1 00:12:25.966 --rc geninfo_all_blocks=1 00:12:25.966 --rc geninfo_unexecuted_blocks=1 00:12:25.966 00:12:25.966 ' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:25.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:25.966 --rc genhtml_branch_coverage=1 00:12:25.966 --rc genhtml_function_coverage=1 00:12:25.966 --rc genhtml_legend=1 00:12:25.966 --rc geninfo_all_blocks=1 00:12:25.966 --rc geninfo_unexecuted_blocks=1 00:12:25.966 00:12:25.966 ' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:25.966 08:48:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:25.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.967 ************************************ 00:12:25.967 START TEST nvmf_example 00:12:25.967 ************************************ 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:25.967 * Looking for test storage... 00:12:25.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lcov --version 00:12:25.967 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.226 
08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.226 --rc genhtml_branch_coverage=1 00:12:26.226 --rc genhtml_function_coverage=1 00:12:26.226 --rc genhtml_legend=1 00:12:26.226 --rc geninfo_all_blocks=1 00:12:26.226 --rc geninfo_unexecuted_blocks=1 00:12:26.226 00:12:26.226 ' 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.226 --rc genhtml_branch_coverage=1 00:12:26.226 --rc genhtml_function_coverage=1 00:12:26.226 --rc genhtml_legend=1 00:12:26.226 --rc geninfo_all_blocks=1 00:12:26.226 --rc geninfo_unexecuted_blocks=1 00:12:26.226 00:12:26.226 ' 00:12:26.226 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:26.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.226 --rc genhtml_branch_coverage=1 00:12:26.226 --rc genhtml_function_coverage=1 00:12:26.226 --rc genhtml_legend=1 00:12:26.226 --rc geninfo_all_blocks=1 00:12:26.227 --rc geninfo_unexecuted_blocks=1 00:12:26.227 00:12:26.227 ' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.227 --rc 
genhtml_branch_coverage=1 00:12:26.227 --rc genhtml_function_coverage=1 00:12:26.227 --rc genhtml_legend=1 00:12:26.227 --rc geninfo_all_blocks=1 00:12:26.227 --rc geninfo_unexecuted_blocks=1 00:12:26.227 00:12:26.227 ' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:26.227 08:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.227 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.228 
08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.228 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:26.228 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:26.228 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.228 08:48:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.762 08:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.762 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:28.763 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:28.763 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:28.763 Found net devices under 0000:09:00.0: cvl_0_0 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:28.763 08:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:28.763 Found net devices under 0000:09:00.1: cvl_0_1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.763 
08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:12:28.763 00:12:28.763 --- 10.0.0.2 ping statistics --- 00:12:28.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.763 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:12:28.763 00:12:28.763 --- 10.0.0.1 ping statistics --- 00:12:28.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.763 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:28.763 08:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=762893 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 762893 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 762893 ']' 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:28.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.763 08:48:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:29.697 08:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:29.697 08:48:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.913 Initializing NVMe Controllers 00:12:41.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:41.913 Initialization complete. Launching workers. 00:12:41.913 ======================================================== 00:12:41.913 Latency(us) 00:12:41.913 Device Information : IOPS MiB/s Average min max 00:12:41.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14446.36 56.43 4431.90 641.47 15731.02 00:12:41.913 ======================================================== 00:12:41.913 Total : 14446.36 56.43 4431.90 641.47 15731.02 00:12:41.913 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.913 rmmod nvme_tcp 00:12:41.913 rmmod nvme_fabrics 00:12:41.913 rmmod nvme_keyring 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 762893 ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 762893 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 762893 ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 762893 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 762893 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 762893' 00:12:41.913 killing process with pid 762893 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 762893 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 762893 00:12:41.913 nvmf threads initialize successfully 00:12:41.913 bdev subsystem init successfully 00:12:41.913 created a nvmf target service 00:12:41.913 create targets's poll groups done 00:12:41.913 all subsystems of target started 00:12:41.913 nvmf target is running 00:12:41.913 all subsystems of target stopped 00:12:41.913 destroy targets's poll groups done 00:12:41.913 destroyed the nvmf target service 00:12:41.913 bdev subsystem finish 
successfully 00:12:41.913 nvmf threads destroy successfully 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.913 08:48:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:42.485 00:12:42.485 real 0m16.352s 00:12:42.485 user 0m45.162s 00:12:42.485 sys 0m3.798s 00:12:42.485 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:42.485 ************************************ 00:12:42.485 END TEST nvmf_example 00:12:42.485 ************************************ 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.485 ************************************ 00:12:42.485 START TEST nvmf_filesystem 00:12:42.485 ************************************ 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:42.485 * Looking for test storage... 
00:12:42.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.485 
08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.485 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:42.485 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:42.485 --rc genhtml_branch_coverage=1 00:12:42.485 --rc genhtml_function_coverage=1 00:12:42.485 --rc genhtml_legend=1 00:12:42.485 --rc geninfo_all_blocks=1 00:12:42.486 --rc geninfo_unexecuted_blocks=1 00:12:42.486 00:12:42.486 ' 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.486 --rc genhtml_branch_coverage=1 00:12:42.486 --rc genhtml_function_coverage=1 00:12:42.486 --rc genhtml_legend=1 00:12:42.486 --rc geninfo_all_blocks=1 00:12:42.486 --rc geninfo_unexecuted_blocks=1 00:12:42.486 00:12:42.486 ' 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.486 --rc genhtml_branch_coverage=1 00:12:42.486 --rc genhtml_function_coverage=1 00:12:42.486 --rc genhtml_legend=1 00:12:42.486 --rc geninfo_all_blocks=1 00:12:42.486 --rc geninfo_unexecuted_blocks=1 00:12:42.486 00:12:42.486 ' 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:42.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.486 --rc genhtml_branch_coverage=1 00:12:42.486 --rc genhtml_function_coverage=1 00:12:42.486 --rc genhtml_legend=1 00:12:42.486 --rc geninfo_all_blocks=1 00:12:42.486 --rc geninfo_unexecuted_blocks=1 00:12:42.486 00:12:42.486 ' 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:42.486 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:42.486 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:42.486 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:42.486 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:42.486 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.487 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:42.487 
08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:42.487 #define SPDK_CONFIG_H 00:12:42.487 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:42.487 #define SPDK_CONFIG_APPS 1 00:12:42.487 #define SPDK_CONFIG_ARCH native 00:12:42.487 #undef SPDK_CONFIG_ASAN 00:12:42.487 #undef SPDK_CONFIG_AVAHI 00:12:42.487 #undef SPDK_CONFIG_CET 00:12:42.487 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:42.487 #define SPDK_CONFIG_COVERAGE 1 00:12:42.487 #define SPDK_CONFIG_CROSS_PREFIX 00:12:42.487 #undef SPDK_CONFIG_CRYPTO 00:12:42.487 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:42.487 #undef SPDK_CONFIG_CUSTOMOCF 00:12:42.487 #undef SPDK_CONFIG_DAOS 00:12:42.487 #define SPDK_CONFIG_DAOS_DIR 00:12:42.487 #define SPDK_CONFIG_DEBUG 1 00:12:42.487 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:42.487 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.487 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:42.487 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:42.487 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:42.487 #undef SPDK_CONFIG_DPDK_UADK 00:12:42.487 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.487 #define SPDK_CONFIG_EXAMPLES 1 00:12:42.487 #undef SPDK_CONFIG_FC 00:12:42.487 #define SPDK_CONFIG_FC_PATH 00:12:42.487 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:42.487 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:42.487 #define SPDK_CONFIG_FSDEV 1 00:12:42.487 #undef SPDK_CONFIG_FUSE 00:12:42.487 #undef SPDK_CONFIG_FUZZER 00:12:42.487 #define SPDK_CONFIG_FUZZER_LIB 00:12:42.487 #undef SPDK_CONFIG_GOLANG 00:12:42.487 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:42.487 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:42.487 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:42.487 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:42.487 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:42.487 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:42.487 #undef SPDK_CONFIG_HAVE_LZ4 00:12:42.487 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:42.487 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:42.487 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:42.487 #define SPDK_CONFIG_IDXD 1 00:12:42.487 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:42.487 #undef SPDK_CONFIG_IPSEC_MB 00:12:42.487 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:42.487 #define SPDK_CONFIG_ISAL 1 00:12:42.487 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:42.487 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:42.487 #define SPDK_CONFIG_LIBDIR 00:12:42.487 #undef SPDK_CONFIG_LTO 00:12:42.487 #define SPDK_CONFIG_MAX_LCORES 128 00:12:42.487 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:42.487 #define SPDK_CONFIG_NVME_CUSE 1 00:12:42.487 #undef SPDK_CONFIG_OCF 00:12:42.487 #define SPDK_CONFIG_OCF_PATH 00:12:42.487 #define SPDK_CONFIG_OPENSSL_PATH 00:12:42.487 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:42.487 #define SPDK_CONFIG_PGO_DIR 00:12:42.487 #undef SPDK_CONFIG_PGO_USE 00:12:42.487 #define SPDK_CONFIG_PREFIX /usr/local 00:12:42.487 #undef SPDK_CONFIG_RAID5F 00:12:42.487 #undef SPDK_CONFIG_RBD 00:12:42.487 #define SPDK_CONFIG_RDMA 1 00:12:42.487 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:42.487 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:42.487 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:42.487 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:42.487 #define SPDK_CONFIG_SHARED 1 00:12:42.487 #undef SPDK_CONFIG_SMA 00:12:42.487 #define SPDK_CONFIG_TESTS 1 00:12:42.487 #undef SPDK_CONFIG_TSAN 00:12:42.487 #define SPDK_CONFIG_UBLK 1 00:12:42.487 #define SPDK_CONFIG_UBSAN 1 00:12:42.487 #undef SPDK_CONFIG_UNIT_TESTS 00:12:42.487 #undef SPDK_CONFIG_URING 00:12:42.487 #define SPDK_CONFIG_URING_PATH 00:12:42.487 #undef SPDK_CONFIG_URING_ZNS 00:12:42.487 #undef SPDK_CONFIG_USDT 00:12:42.487 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:42.487 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:42.487 #define SPDK_CONFIG_VFIO_USER 1 00:12:42.487 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:42.487 #define SPDK_CONFIG_VHOST 1 00:12:42.487 #define SPDK_CONFIG_VIRTIO 1 00:12:42.487 #undef SPDK_CONFIG_VTUNE 00:12:42.487 #define SPDK_CONFIG_VTUNE_DIR 00:12:42.487 #define SPDK_CONFIG_WERROR 1 00:12:42.487 #define SPDK_CONFIG_WPDK_DIR 00:12:42.487 #undef SPDK_CONFIG_XNVME 00:12:42.487 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:42.487 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:42.488 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:42.488 
08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:42.488 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:42.488 
08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:42.488 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:42.489 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.489 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 764651 ]] 00:12:42.490 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 764651 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.do9DuU 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.do9DuU/tests/target /tmp/spdk.do9DuU 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:12:42.750 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=56098799616 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988511744 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5889712128 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:42.751 
08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984224768 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397703168 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22437888 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993797120 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994255872 00:12:42.751 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=458752 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:12:42.751 * Looking for test storage... 
00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=56098799616 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8104304640 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.751 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:42.751 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lcov --version 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:12:42.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.751 --rc genhtml_branch_coverage=1 00:12:42.751 --rc genhtml_function_coverage=1 00:12:42.751 --rc genhtml_legend=1 00:12:42.751 --rc geninfo_all_blocks=1 00:12:42.751 --rc geninfo_unexecuted_blocks=1 00:12:42.751 00:12:42.751 ' 00:12:42.751 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:12:42.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.751 --rc genhtml_branch_coverage=1 00:12:42.751 --rc genhtml_function_coverage=1 00:12:42.751 --rc genhtml_legend=1 00:12:42.752 --rc geninfo_all_blocks=1 00:12:42.752 --rc geninfo_unexecuted_blocks=1 00:12:42.752 00:12:42.752 ' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:12:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.752 --rc genhtml_branch_coverage=1 00:12:42.752 --rc genhtml_function_coverage=1 00:12:42.752 --rc genhtml_legend=1 00:12:42.752 --rc geninfo_all_blocks=1 00:12:42.752 --rc geninfo_unexecuted_blocks=1 00:12:42.752 00:12:42.752 ' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:12:42.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.752 --rc genhtml_branch_coverage=1 00:12:42.752 --rc genhtml_function_coverage=1 00:12:42.752 --rc genhtml_legend=1 00:12:42.752 --rc geninfo_all_blocks=1 00:12:42.752 --rc geninfo_unexecuted_blocks=1 00:12:42.752 00:12:42.752 ' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.752 08:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.752 08:48:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.286 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:45.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:45.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.286 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:45.286 Found net devices under 0000:09:00.0: cvl_0_0 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:45.286 Found net devices under 0000:09:00.1: cvl_0_1 00:12:45.286 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.286 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:12:45.286 00:12:45.287 --- 10.0.0.2 ping statistics --- 00:12:45.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.287 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:12:45.287 00:12:45.287 --- 10.0.0.1 ping statistics --- 00:12:45.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.287 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:45.287 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.287 ************************************ 00:12:45.287 START TEST nvmf_filesystem_no_in_capsule 00:12:45.287 ************************************ 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=766292 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 766292 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 766292 ']' 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.287 [2024-11-06 08:48:58.271594] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:12:45.287 [2024-11-06 08:48:58.271679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.287 [2024-11-06 08:48:58.342400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.287 [2024-11-06 08:48:58.400270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.287 [2024-11-06 08:48:58.400327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:45.287 [2024-11-06 08:48:58.400340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.287 [2024-11-06 08:48:58.400351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.287 [2024-11-06 08:48:58.400360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.287 [2024-11-06 08:48:58.405853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.287 [2024-11-06 08:48:58.405920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.287 [2024-11-06 08:48:58.405987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.287 [2024-11-06 08:48:58.405991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.287 [2024-11-06 08:48:58.553801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.287 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 Malloc1 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 [2024-11-06 08:48:58.746349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:45.546 08:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:45.546 { 00:12:45.546 "name": "Malloc1", 00:12:45.546 "aliases": [ 00:12:45.546 "4f14041b-4a84-40a0-99a7-793fb54ce9fd" 00:12:45.546 ], 00:12:45.546 "product_name": "Malloc disk", 00:12:45.546 "block_size": 512, 00:12:45.546 "num_blocks": 1048576, 00:12:45.546 "uuid": "4f14041b-4a84-40a0-99a7-793fb54ce9fd", 00:12:45.546 "assigned_rate_limits": { 00:12:45.546 "rw_ios_per_sec": 0, 00:12:45.546 "rw_mbytes_per_sec": 0, 00:12:45.546 "r_mbytes_per_sec": 0, 00:12:45.546 "w_mbytes_per_sec": 0 00:12:45.546 }, 00:12:45.546 "claimed": true, 00:12:45.546 "claim_type": "exclusive_write", 00:12:45.546 "zoned": false, 00:12:45.546 "supported_io_types": { 00:12:45.546 "read": true, 00:12:45.546 "write": true, 00:12:45.546 "unmap": true, 00:12:45.546 "flush": true, 00:12:45.546 "reset": true, 00:12:45.546 "nvme_admin": false, 00:12:45.546 "nvme_io": false, 00:12:45.546 "nvme_io_md": false, 00:12:45.546 "write_zeroes": true, 00:12:45.546 "zcopy": true, 00:12:45.546 "get_zone_info": false, 00:12:45.546 "zone_management": false, 00:12:45.546 "zone_append": false, 00:12:45.546 "compare": false, 00:12:45.546 "compare_and_write": 
false, 00:12:45.546 "abort": true, 00:12:45.546 "seek_hole": false, 00:12:45.546 "seek_data": false, 00:12:45.546 "copy": true, 00:12:45.546 "nvme_iov_md": false 00:12:45.546 }, 00:12:45.546 "memory_domains": [ 00:12:45.546 { 00:12:45.546 "dma_device_id": "system", 00:12:45.546 "dma_device_type": 1 00:12:45.546 }, 00:12:45.546 { 00:12:45.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.546 "dma_device_type": 2 00:12:45.546 } 00:12:45.546 ], 00:12:45.546 "driver_specific": {} 00:12:45.546 } 00:12:45.546 ]' 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:45.546 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:45.804 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:45.804 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:45.804 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:45.804 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:45.804 08:48:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.370 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:46.370 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.370 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.370 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.370 08:48:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:48.270 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:48.270 08:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.528 08:49:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:49.460 08:49:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:50.393 08:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.393 ************************************ 00:12:50.393 START TEST filesystem_ext4 00:12:50.393 ************************************ 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:50.393 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:50.394 08:49:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:50.394 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:50.394 08:49:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:50.394 mke2fs 1.47.0 (5-Feb-2023) 00:12:50.652 Discarding device blocks: 0/522240 done 00:12:50.652 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:50.652 Filesystem UUID: 56965c90-57b3-4aaf-9d11-aa0999d6ed94 00:12:50.652 Superblock backups stored on blocks: 00:12:50.652 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:50.652 00:12:50.652 Allocating group tables: 0/64 done 00:12:50.652 Writing inode tables: 0/64 done 00:12:50.909 Creating journal (8192 blocks): done 00:12:50.910 Writing superblocks and filesystem accounting information: 0/64 done 00:12:50.910 00:12:50.910 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:50.910 08:49:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:56.168 08:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 766292 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:56.168 00:12:56.168 real 0m5.786s 00:12:56.168 user 0m0.016s 00:12:56.168 sys 0m0.066s 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.168 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:56.168 ************************************ 00:12:56.168 END TEST filesystem_ext4 00:12:56.168 ************************************ 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:56.426 
08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:56.426 ************************************ 00:12:56.426 START TEST filesystem_btrfs 00:12:56.426 ************************************ 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:56.426 08:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:56.426 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:56.683 btrfs-progs v6.8.1 00:12:56.683 See https://btrfs.readthedocs.io for more information. 00:12:56.683 00:12:56.683 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:56.683 NOTE: several default settings have changed in version 5.15, please make sure 00:12:56.683 this does not affect your deployments: 00:12:56.683 - DUP for metadata (-m dup) 00:12:56.683 - enabled no-holes (-O no-holes) 00:12:56.683 - enabled free-space-tree (-R free-space-tree) 00:12:56.683 00:12:56.683 Label: (null) 00:12:56.683 UUID: 3159cfa8-9c5c-4ee4-ba46-fed69b3d9772 00:12:56.683 Node size: 16384 00:12:56.683 Sector size: 4096 (CPU page size: 4096) 00:12:56.683 Filesystem size: 510.00MiB 00:12:56.683 Block group profiles: 00:12:56.683 Data: single 8.00MiB 00:12:56.683 Metadata: DUP 32.00MiB 00:12:56.683 System: DUP 8.00MiB 00:12:56.683 SSD detected: yes 00:12:56.683 Zoned device: no 00:12:56.683 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:56.683 Checksum: crc32c 00:12:56.683 Number of devices: 1 00:12:56.683 Devices: 00:12:56.683 ID SIZE PATH 00:12:56.683 1 510.00MiB /dev/nvme0n1p1 00:12:56.683 00:12:56.683 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:56.683 08:49:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.616 08:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 766292 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.616 00:12:57.616 real 0m1.255s 00:12:57.616 user 0m0.021s 00:12:57.616 sys 0m0.109s 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:57.616 
08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:57.616 ************************************ 00:12:57.616 END TEST filesystem_btrfs 00:12:57.616 ************************************ 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.616 ************************************ 00:12:57.616 START TEST filesystem_xfs 00:12:57.616 ************************************ 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:57.616 08:49:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:57.616 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:57.616 = sectsz=512 attr=2, projid32bit=1 00:12:57.616 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:57.616 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:57.616 data = bsize=4096 blocks=130560, imaxpct=25 00:12:57.616 = sunit=0 swidth=0 blks 00:12:57.616 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:57.616 log =internal log bsize=4096 blocks=16384, version=2 00:12:57.616 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:57.616 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:58.549 Discarding blocks...Done. 
00:12:58.549 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:58.549 08:49:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 766292 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:01.082 08:49:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:01.082 00:13:01.082 real 0m3.188s 00:13:01.082 user 0m0.013s 00:13:01.082 sys 0m0.069s 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:01.082 ************************************ 00:13:01.082 END TEST filesystem_xfs 00:13:01.082 ************************************ 00:13:01.082 08:49:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.082 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 766292 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 766292 ']' 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 766292 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 766292 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 766292' 00:13:01.083 killing process with pid 766292 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 766292 00:13:01.083 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 766292 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:01.650 00:13:01.650 real 0m16.483s 00:13:01.650 user 1m3.758s 00:13:01.650 sys 0m2.166s 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.650 ************************************ 00:13:01.650 END TEST nvmf_filesystem_no_in_capsule 00:13:01.650 ************************************ 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.650 08:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.650 ************************************ 00:13:01.650 START TEST nvmf_filesystem_in_capsule 00:13:01.650 ************************************ 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=768506 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 768506 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 768506 ']' 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.650 08:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.650 08:49:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.650 [2024-11-06 08:49:14.811563] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:13:01.650 [2024-11-06 08:49:14.811644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.650 [2024-11-06 08:49:14.888561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.909 [2024-11-06 08:49:14.946026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.909 [2024-11-06 08:49:14.946079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.909 [2024-11-06 08:49:14.946094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.909 [2024-11-06 08:49:14.946121] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.909 [2024-11-06 08:49:14.946130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:01.909 [2024-11-06 08:49:14.947560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.909 [2024-11-06 08:49:14.947615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.909 [2024-11-06 08:49:14.947683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.909 [2024-11-06 08:49:14.947686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.909 [2024-11-06 08:49:15.088915] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.909 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.168 Malloc1 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.168 08:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.168 [2024-11-06 08:49:15.274455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.168 08:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:02.168 { 00:13:02.168 "name": "Malloc1", 00:13:02.168 "aliases": [ 00:13:02.168 "bc17bf25-e65e-48fa-9d3b-d6f66ba513f3" 00:13:02.168 ], 00:13:02.168 "product_name": "Malloc disk", 00:13:02.168 "block_size": 512, 00:13:02.168 "num_blocks": 1048576, 00:13:02.168 "uuid": "bc17bf25-e65e-48fa-9d3b-d6f66ba513f3", 00:13:02.168 "assigned_rate_limits": { 00:13:02.168 "rw_ios_per_sec": 0, 00:13:02.168 "rw_mbytes_per_sec": 0, 00:13:02.168 "r_mbytes_per_sec": 0, 00:13:02.168 "w_mbytes_per_sec": 0 00:13:02.168 }, 00:13:02.168 "claimed": true, 00:13:02.168 "claim_type": "exclusive_write", 00:13:02.168 "zoned": false, 00:13:02.168 "supported_io_types": { 00:13:02.168 "read": true, 00:13:02.168 "write": true, 00:13:02.168 "unmap": true, 00:13:02.168 "flush": true, 00:13:02.168 "reset": true, 00:13:02.168 "nvme_admin": false, 00:13:02.168 "nvme_io": false, 00:13:02.168 "nvme_io_md": false, 00:13:02.168 "write_zeroes": true, 00:13:02.168 "zcopy": true, 00:13:02.168 "get_zone_info": false, 00:13:02.168 "zone_management": false, 00:13:02.168 "zone_append": false, 00:13:02.168 "compare": false, 00:13:02.168 "compare_and_write": false, 00:13:02.168 "abort": true, 00:13:02.168 "seek_hole": false, 00:13:02.168 "seek_data": false, 00:13:02.168 "copy": true, 00:13:02.168 "nvme_iov_md": false 00:13:02.168 }, 00:13:02.168 "memory_domains": [ 00:13:02.168 { 00:13:02.168 "dma_device_id": "system", 00:13:02.168 "dma_device_type": 1 00:13:02.168 }, 00:13:02.168 { 00:13:02.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.168 "dma_device_type": 2 00:13:02.168 } 00:13:02.168 ], 00:13:02.168 
"driver_specific": {} 00:13:02.168 } 00:13:02.168 ]' 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:02.168 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.734 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.734 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:02.734 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.734 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:13:02.734 08:49:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:05.261 08:49:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:05.261 08:49:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:05.261 08:49:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:06.194 08:49:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.132 ************************************ 00:13:07.132 START TEST filesystem_in_capsule_ext4 00:13:07.132 ************************************ 00:13:07.132 08:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:07.132 08:49:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:07.132 mke2fs 1.47.0 (5-Feb-2023) 00:13:07.132 Discarding device blocks: 
0/522240 done 00:13:07.132 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:07.132 Filesystem UUID: 35ddcb7a-4208-40f8-8a45-e90b181d472a 00:13:07.132 Superblock backups stored on blocks: 00:13:07.132 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:07.132 00:13:07.132 Allocating group tables: 0/64 done 00:13:07.132 Writing inode tables: 0/64 done 00:13:09.656 Creating journal (8192 blocks): done 00:13:09.656 Writing superblocks and filesystem accounting information: 0/64 done 00:13:09.656 00:13:09.656 08:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:09.656 08:49:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 768506 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.335 00:13:16.335 real 0m8.427s 00:13:16.335 user 0m0.020s 00:13:16.335 sys 0m0.067s 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:16.335 ************************************ 00:13:16.335 END TEST filesystem_in_capsule_ext4 00:13:16.335 ************************************ 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.335 ************************************ 00:13:16.335 START 
TEST filesystem_in_capsule_btrfs 00:13:16.335 ************************************ 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:16.335 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:16.335 btrfs-progs v6.8.1 00:13:16.335 See https://btrfs.readthedocs.io for more information. 00:13:16.335 00:13:16.335 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:16.335 NOTE: several default settings have changed in version 5.15, please make sure 00:13:16.335 this does not affect your deployments: 00:13:16.335 - DUP for metadata (-m dup) 00:13:16.335 - enabled no-holes (-O no-holes) 00:13:16.336 - enabled free-space-tree (-R free-space-tree) 00:13:16.336 00:13:16.336 Label: (null) 00:13:16.336 UUID: 6d818cd2-1f69-4e9a-a67d-f6b1ebb6b0cb 00:13:16.336 Node size: 16384 00:13:16.336 Sector size: 4096 (CPU page size: 4096) 00:13:16.336 Filesystem size: 510.00MiB 00:13:16.336 Block group profiles: 00:13:16.336 Data: single 8.00MiB 00:13:16.336 Metadata: DUP 32.00MiB 00:13:16.336 System: DUP 8.00MiB 00:13:16.336 SSD detected: yes 00:13:16.336 Zoned device: no 00:13:16.336 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:16.336 Checksum: crc32c 00:13:16.336 Number of devices: 1 00:13:16.336 Devices: 00:13:16.336 ID SIZE PATH 00:13:16.336 1 510.00MiB /dev/nvme0n1p1 00:13:16.336 00:13:16.336 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:16.336 08:49:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 768506 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:16.336 00:13:16.336 real 0m0.549s 00:13:16.336 user 0m0.010s 00:13:16.336 sys 0m0.114s 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 ************************************ 00:13:16.336 END TEST filesystem_in_capsule_btrfs 00:13:16.336 ************************************ 00:13:16.336 08:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:16.336 ************************************ 00:13:16.336 START TEST filesystem_in_capsule_xfs 00:13:16.336 ************************************ 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:16.336 
08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:16.336 08:49:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:16.336 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:16.336 = sectsz=512 attr=2, projid32bit=1 00:13:16.336 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:16.336 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:16.336 data = bsize=4096 blocks=130560, imaxpct=25 00:13:16.336 = sunit=0 swidth=0 blks 00:13:16.336 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:16.336 log =internal log bsize=4096 blocks=16384, version=2 00:13:16.336 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:16.336 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:16.900 Discarding blocks...Done. 
00:13:16.900 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:16.900 08:49:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 768506 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:19.430 00:13:19.430 real 0m3.180s 00:13:19.430 user 0m0.012s 00:13:19.430 sys 0m0.065s 00:13:19.430 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:19.431 ************************************ 00:13:19.431 END TEST filesystem_in_capsule_xfs 00:13:19.431 ************************************ 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.431 08:49:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 768506 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 768506 ']' 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 768506 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:19.431 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.431 08:49:32 
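The `waitforserial_disconnect` trace above polls `lsblk -o NAME,SERIAL` until the controller's serial (`SPDKISFASTANDAWESOME`) no longer appears. A minimal standalone sketch of that polling pattern — the retry cap and sleep interval here are assumptions, not the harness's exact limits:

```shell
# Sketch of the serial-disconnect wait loop traced above. Retry cap and
# sleep interval are illustrative; the real helper's limits may differ.
wait_serial_gone() {
  local serial=$1 i=0
  while lsblk -l -o NAME,SERIAL 2>/dev/null | grep -q -w "$serial"; do
    (( ++i > 15 )) && return 1   # give up after ~15 attempts
    sleep 1
  done
  return 0
}
```

Once the grep stops matching, the disconnect is considered complete and the harness proceeds to tear down the subsystem.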
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 768506 00:13:19.688 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.688 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.688 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 768506' 00:13:19.688 killing process with pid 768506 00:13:19.688 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 768506 00:13:19.688 08:49:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 768506 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:19.946 00:13:19.946 real 0m18.399s 00:13:19.946 user 1m11.295s 00:13:19.946 sys 0m2.258s 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:19.946 ************************************ 00:13:19.946 END TEST nvmf_filesystem_in_capsule 00:13:19.946 ************************************ 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
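The `killprocess` sequence traced above (a `kill -0` liveness check, a `ps --no-headers -o comm=` lookup, a guard against killing a bare `sudo` wrapper, then `kill` and `wait`) can be sketched roughly as follows; the guard and the reap are taken from the trace, the rest is a simplification:

```shell
# Rough sketch of the killprocess flow traced above: confirm the pid is
# alive, refuse to kill a bare sudo wrapper, then kill it and reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0     # already gone
  local name
  name=$(ps --no-headers -o comm= -p "$pid")
  [ "$name" = sudo ] && return 1             # never kill the sudo wrapper itself
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true            # reap; ignore the signal exit code
}
```

The `wait` matters in the harness: it ensures the nvmf target has fully exited before the next test phase reuses its ports and devices.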
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:19.946 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:19.946 rmmod nvme_tcp 00:13:19.946 rmmod nvme_fabrics 00:13:19.946 rmmod nvme_keyring 00:13:20.205 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:20.205 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.206 08:49:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:22.115 00:13:22.115 real 0m39.717s 00:13:22.115 user 2m16.147s 00:13:22.115 sys 0m6.172s 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:22.115 ************************************ 00:13:22.115 END TEST nvmf_filesystem 00:13:22.115 ************************************ 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:22.115 ************************************ 00:13:22.115 START TEST nvmf_target_discovery 00:13:22.115 ************************************ 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:22.115 * Looking for test storage... 
00:13:22.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:13:22.115 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:22.375 
08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.375 --rc genhtml_branch_coverage=1 00:13:22.375 --rc genhtml_function_coverage=1 00:13:22.375 --rc genhtml_legend=1 00:13:22.375 --rc geninfo_all_blocks=1 00:13:22.375 --rc geninfo_unexecuted_blocks=1 00:13:22.375 00:13:22.375 ' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.375 --rc genhtml_branch_coverage=1 00:13:22.375 --rc genhtml_function_coverage=1 00:13:22.375 --rc genhtml_legend=1 00:13:22.375 --rc geninfo_all_blocks=1 00:13:22.375 --rc geninfo_unexecuted_blocks=1 00:13:22.375 00:13:22.375 ' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.375 --rc genhtml_branch_coverage=1 00:13:22.375 --rc genhtml_function_coverage=1 00:13:22.375 --rc genhtml_legend=1 00:13:22.375 --rc geninfo_all_blocks=1 00:13:22.375 --rc geninfo_unexecuted_blocks=1 00:13:22.375 00:13:22.375 ' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:22.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.375 --rc genhtml_branch_coverage=1 00:13:22.375 --rc genhtml_function_coverage=1 00:13:22.375 --rc genhtml_legend=1 00:13:22.375 --rc geninfo_all_blocks=1 00:13:22.375 --rc geninfo_unexecuted_blocks=1 00:13:22.375 00:13:22.375 ' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.375 08:49:35 
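The `scripts/common.sh` trace above (`cmp_versions 1.15 '<' 2`, splitting each version with `IFS=.-:` into `ver1`/`ver2` arrays and comparing field by field) amounts to a numeric per-field version compare. A simplified, self-contained sketch of the `<` path — dot-only splitting here; the real helper also splits on `-` and `:`:

```shell
# Simplified sketch of the cmp_versions '<' comparison traced above: split
# both versions into numeric fields, compare left to right, pad with 0.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=${#a[@]}
  (( ${#b[@]} > n )) && n=${#b[@]}
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the harness treats lcov 1.15 as older than 2 and picks the matching `--rc` option set.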
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.375 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:22.376 08:49:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:24.914 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.914 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.914 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:24.915 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:24.915 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:24.915 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:24.915 Found net devices under 0000:09:00.0: cvl_0_0 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:24.915 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:24.915 Found net devices under 0000:09:00.1: cvl_0_1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:24.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:13:24.915 00:13:24.915 --- 10.0.0.2 ping statistics --- 00:13:24.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.915 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:24.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:13:24.915 00:13:24.915 --- 10.0.0.1 ping statistics --- 00:13:24.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.915 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=772809 00:13:24.915 08:49:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 772809 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 772809 ']' 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.915 08:49:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:24.915 [2024-11-06 08:49:37.938357] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:13:24.915 [2024-11-06 08:49:37.938435] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.915 [2024-11-06 08:49:38.010635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.915 [2024-11-06 08:49:38.069293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
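(editorial note) At this point the harness has launched `nvmf_tgt` inside the `cvl_0_0_ns_spdk` namespace and is blocking in `waitforlisten 772809` until the app exposes its RPC socket at `/var/tmp/spdk.sock` ("Waiting for process to start up and listen on UNIX domain socket..."). A hedged sketch of that kind of bounded polling loop — simplified from the real helper in autotest_common.sh, which additionally verifies the pid is still alive and probes the socket with an RPC call rather than only checking for its existence:

```shell
#!/usr/bin/env bash
# Simplified waitforlisten-style loop: poll until a UNIX socket path
# appears, giving up after max_retries attempts. Illustrative only;
# the real SPDK helper takes a pid and an optional RPC address.
waitforlisten() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

# demo against a path that is never created: times out and returns 1
waitforlisten "$(mktemp -u)" 3 || echo "timed out"   # → timed out
```

The bound matters because the launch runs under `trap ... SIGINT SIGTERM EXIT` (visible a few records below): a target that never comes up must fail the test promptly instead of hanging the pipeline.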
00:13:24.915 [2024-11-06 08:49:38.069347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.915 [2024-11-06 08:49:38.069360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.915 [2024-11-06 08:49:38.069370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.915 [2024-11-06 08:49:38.069379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:24.915 [2024-11-06 08:49:38.070938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.915 [2024-11-06 08:49:38.070999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.915 [2024-11-06 08:49:38.071062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.915 [2024-11-06 08:49:38.071065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.915 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:24.915 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:24.915 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:24.915 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:24.915 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 [2024-11-06 08:49:38.224365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 Null1 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 [2024-11-06 08:49:38.282036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 Null2 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.174 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 Null3 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 Null4 00:13:25.175 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.175 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:25.434 00:13:25.434 Discovery Log Number of Records 6, Generation counter 6 00:13:25.434 =====Discovery Log Entry 0====== 00:13:25.434 trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: current discovery subsystem 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4420 00:13:25.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: explicit discovery connections, duplicate discovery information 00:13:25.434 sectype: none 00:13:25.434 =====Discovery Log Entry 1====== 00:13:25.434 trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: nvme subsystem 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4420 00:13:25.434 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: none 00:13:25.434 sectype: none 00:13:25.434 =====Discovery Log Entry 2====== 00:13:25.434 
trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: nvme subsystem 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4420 00:13:25.434 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: none 00:13:25.434 sectype: none 00:13:25.434 =====Discovery Log Entry 3====== 00:13:25.434 trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: nvme subsystem 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4420 00:13:25.434 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: none 00:13:25.434 sectype: none 00:13:25.434 =====Discovery Log Entry 4====== 00:13:25.434 trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: nvme subsystem 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4420 00:13:25.434 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: none 00:13:25.434 sectype: none 00:13:25.434 =====Discovery Log Entry 5====== 00:13:25.434 trtype: tcp 00:13:25.434 adrfam: ipv4 00:13:25.434 subtype: discovery subsystem referral 00:13:25.434 treq: not required 00:13:25.434 portid: 0 00:13:25.434 trsvcid: 4430 00:13:25.434 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:25.434 traddr: 10.0.0.2 00:13:25.434 eflags: none 00:13:25.434 sectype: none 00:13:25.434 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:25.434 Perform nvmf subsystem discovery via RPC 00:13:25.434 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:25.434 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.434 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.434 [ 00:13:25.434 { 00:13:25.434 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:25.434 "subtype": "Discovery", 00:13:25.434 "listen_addresses": [ 00:13:25.434 { 00:13:25.434 "trtype": "TCP", 00:13:25.434 "adrfam": "IPv4", 00:13:25.434 "traddr": "10.0.0.2", 00:13:25.434 "trsvcid": "4420" 00:13:25.434 } 00:13:25.434 ], 00:13:25.434 "allow_any_host": true, 00:13:25.434 "hosts": [] 00:13:25.434 }, 00:13:25.434 { 00:13:25.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:25.434 "subtype": "NVMe", 00:13:25.434 "listen_addresses": [ 00:13:25.434 { 00:13:25.434 "trtype": "TCP", 00:13:25.434 "adrfam": "IPv4", 00:13:25.434 "traddr": "10.0.0.2", 00:13:25.434 "trsvcid": "4420" 00:13:25.434 } 00:13:25.434 ], 00:13:25.434 "allow_any_host": true, 00:13:25.434 "hosts": [], 00:13:25.434 "serial_number": "SPDK00000000000001", 00:13:25.434 "model_number": "SPDK bdev Controller", 00:13:25.434 "max_namespaces": 32, 00:13:25.434 "min_cntlid": 1, 00:13:25.434 "max_cntlid": 65519, 00:13:25.434 "namespaces": [ 00:13:25.434 { 00:13:25.434 "nsid": 1, 00:13:25.434 "bdev_name": "Null1", 00:13:25.434 "name": "Null1", 00:13:25.434 "nguid": "F96BE48D6804496685E79678AE46720E", 00:13:25.434 "uuid": "f96be48d-6804-4966-85e7-9678ae46720e" 00:13:25.434 } 00:13:25.434 ] 00:13:25.434 }, 00:13:25.434 { 00:13:25.434 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:25.434 "subtype": "NVMe", 00:13:25.434 "listen_addresses": [ 00:13:25.434 { 00:13:25.434 "trtype": "TCP", 00:13:25.434 "adrfam": "IPv4", 00:13:25.434 "traddr": "10.0.0.2", 00:13:25.434 "trsvcid": "4420" 00:13:25.434 } 00:13:25.434 ], 00:13:25.434 "allow_any_host": true, 00:13:25.434 "hosts": [], 00:13:25.434 "serial_number": "SPDK00000000000002", 00:13:25.434 "model_number": "SPDK bdev Controller", 00:13:25.434 "max_namespaces": 32, 00:13:25.434 "min_cntlid": 1, 00:13:25.434 "max_cntlid": 65519, 00:13:25.434 "namespaces": [ 00:13:25.434 { 00:13:25.434 "nsid": 1, 00:13:25.434 "bdev_name": "Null2", 00:13:25.434 "name": "Null2", 00:13:25.434 "nguid": "AAE3C4F8F6704BC0BD0174B459E62110", 
00:13:25.434 "uuid": "aae3c4f8-f670-4bc0-bd01-74b459e62110" 00:13:25.434 } 00:13:25.434 ] 00:13:25.434 }, 00:13:25.434 { 00:13:25.434 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:25.434 "subtype": "NVMe", 00:13:25.434 "listen_addresses": [ 00:13:25.434 { 00:13:25.434 "trtype": "TCP", 00:13:25.434 "adrfam": "IPv4", 00:13:25.434 "traddr": "10.0.0.2", 00:13:25.434 "trsvcid": "4420" 00:13:25.434 } 00:13:25.434 ], 00:13:25.434 "allow_any_host": true, 00:13:25.434 "hosts": [], 00:13:25.434 "serial_number": "SPDK00000000000003", 00:13:25.434 "model_number": "SPDK bdev Controller", 00:13:25.434 "max_namespaces": 32, 00:13:25.434 "min_cntlid": 1, 00:13:25.434 "max_cntlid": 65519, 00:13:25.434 "namespaces": [ 00:13:25.434 { 00:13:25.434 "nsid": 1, 00:13:25.434 "bdev_name": "Null3", 00:13:25.434 "name": "Null3", 00:13:25.434 "nguid": "C7262D54F29040C0918FEF153F92EE72", 00:13:25.434 "uuid": "c7262d54-f290-40c0-918f-ef153f92ee72" 00:13:25.434 } 00:13:25.434 ] 00:13:25.434 }, 00:13:25.434 { 00:13:25.434 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:25.434 "subtype": "NVMe", 00:13:25.434 "listen_addresses": [ 00:13:25.434 { 00:13:25.434 "trtype": "TCP", 00:13:25.434 "adrfam": "IPv4", 00:13:25.434 "traddr": "10.0.0.2", 00:13:25.434 "trsvcid": "4420" 00:13:25.434 } 00:13:25.434 ], 00:13:25.435 "allow_any_host": true, 00:13:25.435 "hosts": [], 00:13:25.435 "serial_number": "SPDK00000000000004", 00:13:25.435 "model_number": "SPDK bdev Controller", 00:13:25.435 "max_namespaces": 32, 00:13:25.435 "min_cntlid": 1, 00:13:25.435 "max_cntlid": 65519, 00:13:25.435 "namespaces": [ 00:13:25.435 { 00:13:25.435 "nsid": 1, 00:13:25.435 "bdev_name": "Null4", 00:13:25.435 "name": "Null4", 00:13:25.435 "nguid": "2D9CB04D6178417FBDEFFCA62BE5AE72", 00:13:25.435 "uuid": "2d9cb04d-6178-417f-bdef-fca62be5ae72" 00:13:25.435 } 00:13:25.435 ] 00:13:25.435 } 00:13:25.435 ] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:25.435 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:25.694 rmmod nvme_tcp 00:13:25.694 rmmod nvme_fabrics 00:13:25.694 rmmod nvme_keyring 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:25.694 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 772809 ']' 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 772809 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 772809 ']' 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 772809 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:25.695 
08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 772809 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 772809' 00:13:25.695 killing process with pid 772809 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 772809 00:13:25.695 08:49:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 772809 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.044 08:49:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.953 00:13:27.953 real 0m5.770s 00:13:27.953 user 0m4.855s 00:13:27.953 sys 0m2.065s 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:27.953 ************************************ 00:13:27.953 END TEST nvmf_target_discovery 00:13:27.953 ************************************ 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.953 ************************************ 00:13:27.953 START TEST nvmf_referrals 00:13:27.953 ************************************ 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:27.953 * Looking for test storage... 
00:13:27.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lcov --version 00:13:27.953 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:28.212 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:28.212 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:28.213 08:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.213 
--rc genhtml_branch_coverage=1 00:13:28.213 --rc genhtml_function_coverage=1 00:13:28.213 --rc genhtml_legend=1 00:13:28.213 --rc geninfo_all_blocks=1 00:13:28.213 --rc geninfo_unexecuted_blocks=1 00:13:28.213 00:13:28.213 ' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.213 --rc genhtml_branch_coverage=1 00:13:28.213 --rc genhtml_function_coverage=1 00:13:28.213 --rc genhtml_legend=1 00:13:28.213 --rc geninfo_all_blocks=1 00:13:28.213 --rc geninfo_unexecuted_blocks=1 00:13:28.213 00:13:28.213 ' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.213 --rc genhtml_branch_coverage=1 00:13:28.213 --rc genhtml_function_coverage=1 00:13:28.213 --rc genhtml_legend=1 00:13:28.213 --rc geninfo_all_blocks=1 00:13:28.213 --rc geninfo_unexecuted_blocks=1 00:13:28.213 00:13:28.213 ' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:28.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.213 --rc genhtml_branch_coverage=1 00:13:28.213 --rc genhtml_function_coverage=1 00:13:28.213 --rc genhtml_legend=1 00:13:28.213 --rc geninfo_all_blocks=1 00:13:28.213 --rc geninfo_unexecuted_blocks=1 00:13:28.213 00:13:28.213 ' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.213 
08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.213 08:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.213 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:28.214 08:49:41 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.214 08:49:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:30.116 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:30.116 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:30.116 Found net devices under 0000:09:00.0: cvl_0_0 00:13:30.116 08:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:30.116 Found net devices under 0000:09:00.1: cvl_0_1 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.116 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.375 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:30.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:13:30.376 00:13:30.376 --- 10.0.0.2 ping statistics --- 00:13:30.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.376 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:13:30.376 00:13:30.376 --- 10.0.0.1 ping statistics --- 00:13:30.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.376 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=774909 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 774909 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 774909 ']' 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:30.376 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.376 [2024-11-06 08:49:43.550447] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:13:30.376 [2024-11-06 08:49:43.550529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.376 [2024-11-06 08:49:43.622931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.635 [2024-11-06 08:49:43.682546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.635 [2024-11-06 08:49:43.682598] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:30.635 [2024-11-06 08:49:43.682612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.635 [2024-11-06 08:49:43.682623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.635 [2024-11-06 08:49:43.682633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.635 [2024-11-06 08:49:43.684246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.635 [2024-11-06 08:49:43.684314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.635 [2024-11-06 08:49:43.684337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.635 [2024-11-06 08:49:43.684355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 [2024-11-06 08:49:43.826912] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 [2024-11-06 08:49:43.849019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:30.635 08:49:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.635 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.892 08:49:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.150 08:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.150 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:31.408 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:31.409 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.667 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:31.924 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:31.924 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:31.924 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.924 08:49:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:31.924 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:32.182 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:32.182 08:49:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:32.182 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:32.182 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:32.182 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.182 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.440 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.440 rmmod nvme_tcp 00:13:32.440 rmmod nvme_fabrics 00:13:32.699 rmmod nvme_keyring 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 774909 ']' 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 774909 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 774909 ']' 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 774909 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774909 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774909' 00:13:32.699 killing process with pid 774909 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- 
# kill 774909 00:13:32.699 08:49:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 774909 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.959 08:49:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.868 00:13:34.868 real 0m6.908s 00:13:34.868 user 0m10.824s 00:13:34.868 sys 0m2.214s 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:34.868 ************************************ 
00:13:34.868 END TEST nvmf_referrals 00:13:34.868 ************************************ 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.868 ************************************ 00:13:34.868 START TEST nvmf_connect_disconnect 00:13:34.868 ************************************ 00:13:34.868 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:35.128 * Looking for test storage... 
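The teardown logged above (nvmf/common.sh@125-128: `set +e`, then `modprobe -v -r` inside a `for i in {1..20}` loop, followed by `killprocess`, which probes the pid with `kill -0` and checks the process name before signalling) follows a bounded retry-until-unloaded pattern. A minimal standalone sketch of that pattern, assuming illustrative helper names (`retry`, `unload_nvme_mods` are not the suite's actual function names), with the `modprobe` calls shown for illustration only since they require root:

```shell
# Bounded retry: rerun a command until it succeeds or the attempt
# budget is exhausted (the log's `for i in {1..20}` around modprobe).
retry() { # retry <max> <cmd...>
  local max=$1 i
  shift
  for ((i = 1; i <= max; i++)); do
    if "$@"; then
      return 0
    fi
    sleep 0.1 # brief pause before the next attempt
  done
  return 1
}

# Illustrative wrapper mirroring the teardown in the log; needs root.
unload_nvme_mods() {
  set +e # a module may already be gone, or still have users
  retry 20 modprobe -v -r nvme-tcp
  retry 20 modprobe -v -r nvme-fabrics
  set -e
}
```

The `kill -0` plus `ps --no-headers -o comm=` check seen in `killprocess` above is a guard against pid reuse: a stale pid that now belongs to an unrelated process will not match the expected process name and will not be signalled.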
00:13:35.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:35.128 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:35.128 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:13:35.128 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:35.128 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:35.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.129 --rc genhtml_branch_coverage=1 00:13:35.129 --rc genhtml_function_coverage=1 00:13:35.129 --rc genhtml_legend=1 00:13:35.129 --rc geninfo_all_blocks=1 00:13:35.129 --rc geninfo_unexecuted_blocks=1 00:13:35.129 00:13:35.129 ' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:35.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.129 --rc genhtml_branch_coverage=1 00:13:35.129 --rc genhtml_function_coverage=1 00:13:35.129 --rc genhtml_legend=1 00:13:35.129 --rc geninfo_all_blocks=1 00:13:35.129 --rc geninfo_unexecuted_blocks=1 00:13:35.129 00:13:35.129 ' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:35.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.129 --rc genhtml_branch_coverage=1 00:13:35.129 --rc genhtml_function_coverage=1 00:13:35.129 --rc genhtml_legend=1 00:13:35.129 --rc geninfo_all_blocks=1 00:13:35.129 --rc geninfo_unexecuted_blocks=1 00:13:35.129 00:13:35.129 ' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:35.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.129 --rc genhtml_branch_coverage=1 00:13:35.129 --rc genhtml_function_coverage=1 00:13:35.129 --rc genhtml_legend=1 00:13:35.129 --rc geninfo_all_blocks=1 00:13:35.129 --rc geninfo_unexecuted_blocks=1 00:13:35.129 00:13:35.129 ' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:35.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:35.129 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:35.130 08:49:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.674 08:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.674 08:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:37.674 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:37.674 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.674 08:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:37.674 Found net devices under 0000:09:00.0: cvl_0_0 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:37.674 08:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:37.674 Found net devices under 0000:09:00.1: cvl_0_1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.674 08:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:37.674 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:37.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:37.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms
00:13:37.675
00:13:37.675 --- 10.0.0.2 ping statistics ---
00:13:37.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:37.675 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:37.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:37.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms
00:13:37.675
00:13:37.675 --- 10.0.0.1 ping statistics ---
00:13:37.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:37.675 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=777211
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 777211
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 777211 ']'
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:37.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.675 [2024-11-06 08:49:50.630398] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:13:37.675 [2024-11-06 08:49:50.630492] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:37.675 [2024-11-06 08:49:50.707464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:37.675 [2024-11-06 08:49:50.766221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:37.675 [2024-11-06 08:49:50.766277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:37.675 [2024-11-06 08:49:50.766291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:37.675 [2024-11-06 08:49:50.766303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:37.675 [2024-11-06 08:49:50.766319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:37.675 [2024-11-06 08:49:50.767923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:37.675 [2024-11-06 08:49:50.767984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:37.675 [2024-11-06 08:49:50.768048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:37.675 [2024-11-06 08:49:50.768051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:13:37.675 08:49:50
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.675 [2024-11-06 08:49:50.927670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.675 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:37.934 [2024-11-06 08:49:50.993482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:13:37.934 08:49:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:13:40.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:43.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:46.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:48.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:52.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:52.082 rmmod nvme_tcp
00:13:52.082 rmmod nvme_fabrics
00:13:52.082 rmmod nvme_keyring
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 777211 ']'
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 777211
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 777211 ']'
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 777211
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 777211
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 777211'
killing process with pid 777211
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 777211
00:13:52.082 08:50:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 777211
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:52.082 08:50:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:53.994
00:13:53.994 real 0m18.979s
00:13:53.994 user 0m56.793s
00:13:53.994 sys 0m3.447s
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:13:53.994 ************************************
00:13:53.994 END TEST nvmf_connect_disconnect
00:13:53.994 ************************************
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:53.994 ************************************
00:13:53.994 START TEST nvmf_multitarget
00:13:53.994 ************************************
00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:13:53.994 * Looking for test storage...
00:13:53.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lcov --version 00:13:53.994 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:13:54.253 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.253 --rc genhtml_branch_coverage=1 00:13:54.253 --rc genhtml_function_coverage=1 00:13:54.253 --rc genhtml_legend=1 00:13:54.253 --rc geninfo_all_blocks=1 00:13:54.253 --rc geninfo_unexecuted_blocks=1 00:13:54.253 00:13:54.253 ' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:13:54.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.253 --rc genhtml_branch_coverage=1 00:13:54.253 --rc genhtml_function_coverage=1 00:13:54.253 --rc genhtml_legend=1 00:13:54.253 --rc geninfo_all_blocks=1 00:13:54.253 --rc geninfo_unexecuted_blocks=1 00:13:54.253 00:13:54.253 ' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:13:54.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.253 --rc genhtml_branch_coverage=1 00:13:54.253 --rc genhtml_function_coverage=1 00:13:54.253 --rc genhtml_legend=1 00:13:54.253 --rc geninfo_all_blocks=1 00:13:54.253 --rc geninfo_unexecuted_blocks=1 00:13:54.253 00:13:54.253 ' 00:13:54.253 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:13:54.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.253 --rc genhtml_branch_coverage=1 00:13:54.253 --rc genhtml_function_coverage=1 00:13:54.253 --rc genhtml_legend=1 00:13:54.253 --rc geninfo_all_blocks=1 00:13:54.253 --rc geninfo_unexecuted_blocks=1 00:13:54.253 00:13:54.253 ' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.254 08:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.254 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.254 08:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.254 08:50:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:56.156 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:56.156 08:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:56.157 08:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:56.157 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:56.157 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.157 08:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:56.157 Found net devices under 0000:09:00.0: cvl_0_0 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.157 
08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:56.157 Found net devices under 0000:09:00.1: cvl_0_1 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.157 08:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.157 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:56.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:13:56.416 00:13:56.416 --- 10.0.0.2 ping statistics --- 00:13:56.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.416 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:56.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:13:56.416 00:13:56.416 --- 10.0.0.1 ping statistics --- 00:13:56.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.416 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=780969 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 780969 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 780969 ']' 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.416 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.416 [2024-11-06 08:50:09.621764] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:13:56.416 [2024-11-06 08:50:09.621867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.416 [2024-11-06 08:50:09.694807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:56.673 [2024-11-06 08:50:09.754649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:56.673 [2024-11-06 08:50:09.754701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:56.673 [2024-11-06 08:50:09.754714] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.673 [2024-11-06 08:50:09.754725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.673 [2024-11-06 08:50:09.754735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.673 [2024-11-06 08:50:09.756317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.673 [2024-11-06 08:50:09.756383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.673 [2024-11-06 08:50:09.756451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:56.673 [2024-11-06 08:50:09.756454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:56.673 08:50:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.673 08:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:56.931 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:56.931 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:56.931 "nvmf_tgt_1" 00:13:56.931 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:57.188 "nvmf_tgt_2" 00:13:57.188 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:57.188 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:57.188 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:57.188 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:57.445 true 00:13:57.445 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:57.445 true 00:13:57.445 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:57.445 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:57.703 rmmod nvme_tcp 00:13:57.703 rmmod nvme_fabrics 00:13:57.703 rmmod nvme_keyring 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:57.703 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 780969 ']' 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 780969 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 780969 ']' 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 780969 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780969 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 780969' 00:13:57.704 killing process with pid 780969 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 780969 00:13:57.704 08:50:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 780969 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.963 08:50:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.869 00:13:59.869 real 0m5.973s 00:13:59.869 user 0m6.939s 00:13:59.869 sys 0m2.045s 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:59.869 ************************************ 00:13:59.869 END TEST nvmf_multitarget 00:13:59.869 ************************************ 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.869 08:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:00.128 ************************************ 00:14:00.128 START TEST nvmf_rpc 00:14:00.128 ************************************ 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:00.128 * Looking for test storage... 
00:14:00.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lcov --version 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.128 08:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:00.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.128 --rc genhtml_branch_coverage=1 00:14:00.128 --rc genhtml_function_coverage=1 00:14:00.128 --rc genhtml_legend=1 00:14:00.128 --rc geninfo_all_blocks=1 00:14:00.128 --rc geninfo_unexecuted_blocks=1 
00:14:00.128 00:14:00.128 ' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:00.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.128 --rc genhtml_branch_coverage=1 00:14:00.128 --rc genhtml_function_coverage=1 00:14:00.128 --rc genhtml_legend=1 00:14:00.128 --rc geninfo_all_blocks=1 00:14:00.128 --rc geninfo_unexecuted_blocks=1 00:14:00.128 00:14:00.128 ' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:00.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.128 --rc genhtml_branch_coverage=1 00:14:00.128 --rc genhtml_function_coverage=1 00:14:00.128 --rc genhtml_legend=1 00:14:00.128 --rc geninfo_all_blocks=1 00:14:00.128 --rc geninfo_unexecuted_blocks=1 00:14:00.128 00:14:00.128 ' 00:14:00.128 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:00.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.128 --rc genhtml_branch_coverage=1 00:14:00.128 --rc genhtml_function_coverage=1 00:14:00.128 --rc genhtml_legend=1 00:14:00.128 --rc geninfo_all_blocks=1 00:14:00.128 --rc geninfo_unexecuted_blocks=1 00:14:00.128 00:14:00.128 ' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.129 08:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:00.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:00.129 08:50:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:00.129 08:50:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.666 
08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:14:02.666 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:02.666 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.666 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:02.666 Found net devices under 0000:09:00.0: cvl_0_0 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:02.667 Found net devices under 0000:09:00.1: cvl_0_1 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.667 08:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:02.667 
08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:02.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:14:02.667 00:14:02.667 --- 10.0.0.2 ping statistics --- 00:14:02.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.667 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:14:02.667 00:14:02.667 --- 10.0.0.1 ping statistics --- 00:14:02.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.667 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=783079 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.667 
08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 783079 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 783079 ']' 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.667 08:50:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.667 [2024-11-06 08:50:15.760432] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:14:02.667 [2024-11-06 08:50:15.760505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.667 [2024-11-06 08:50:15.830448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.667 [2024-11-06 08:50:15.886502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.667 [2024-11-06 08:50:15.886551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.667 [2024-11-06 08:50:15.886574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.667 [2024-11-06 08:50:15.886584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:02.667 [2024-11-06 08:50:15.886593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.667 [2024-11-06 08:50:15.888197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.667 [2024-11-06 08:50:15.888257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.667 [2024-11-06 08:50:15.888304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.667 [2024-11-06 08:50:15.888307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.925 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.925 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:02.925 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:02.925 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:02.926 "tick_rate": 2700000000, 00:14:02.926 "poll_groups": [ 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_000", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 
"current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_001", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_002", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_003", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [] 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 }' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.926 [2024-11-06 08:50:16.135072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:02.926 "tick_rate": 2700000000, 00:14:02.926 "poll_groups": [ 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_000", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [ 00:14:02.926 { 00:14:02.926 "trtype": "TCP" 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_001", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [ 00:14:02.926 { 00:14:02.926 "trtype": "TCP" 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_002", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 
"current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [ 00:14:02.926 { 00:14:02.926 "trtype": "TCP" 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 }, 00:14:02.926 { 00:14:02.926 "name": "nvmf_tgt_poll_group_003", 00:14:02.926 "admin_qpairs": 0, 00:14:02.926 "io_qpairs": 0, 00:14:02.926 "current_admin_qpairs": 0, 00:14:02.926 "current_io_qpairs": 0, 00:14:02.926 "pending_bdev_io": 0, 00:14:02.926 "completed_nvme_io": 0, 00:14:02.926 "transports": [ 00:14:02.926 { 00:14:02.926 "trtype": "TCP" 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 } 00:14:02.926 ] 00:14:02.926 }' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:02.926 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 Malloc1 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 [2024-11-06 08:50:16.315518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.185 
08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:03.185 [2024-11-06 08:50:16.338104] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:03.185 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:03.185 could not add new controller: failed to write to nvme-fabrics device 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.185 08:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.185 08:50:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.119 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.119 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.119 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.119 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:04.119 08:50:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.019 08:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.019 [2024-11-06 08:50:19.218803] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:06.019 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:06.019 could not add new controller: failed to write to nvme-fabrics device 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.019 08:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.019 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:06.953 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:06.953 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:06.953 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:06.953 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:06.953 08:50:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:08.852 08:50:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:08.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.852 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.852 [2024-11-06 08:50:22.051536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.853 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:09.419 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:09.419 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:09.419 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:09.419 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:09.419 08:50:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 08:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 [2024-11-06 08:50:24.842795] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.946 08:50:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:12.204 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.204 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:12.204 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.204 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:12.204 08:50:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:14.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 [2024-11-06 08:50:27.635640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.732 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.733 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.733 08:50:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:14.991 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:14.991 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:14.991 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:14.991 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:14.991 08:50:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.518 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.519 [2024-11-06 08:50:30.420016] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.519 08:50:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.084 08:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.084 08:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:18.084 08:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.084 08:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:18.084 08:50:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.983 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.241 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.242 [2024-11-06 08:50:33.287559] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.242 08:50:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.242 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.809 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.809 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.809 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.809 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.809 08:50:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:23.340 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [2024-11-06 08:50:36.117210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [2024-11-06 08:50:36.165276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.341 
08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [2024-11-06 08:50:36.213399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.341 
08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 [2024-11-06 08:50:36.261566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:23.341 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 [2024-11-06 
08:50:36.309720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 
08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:23.342 "tick_rate": 2700000000, 00:14:23.342 "poll_groups": [ 00:14:23.342 { 00:14:23.342 "name": "nvmf_tgt_poll_group_000", 00:14:23.342 "admin_qpairs": 2, 00:14:23.342 "io_qpairs": 84, 00:14:23.342 "current_admin_qpairs": 0, 00:14:23.342 "current_io_qpairs": 0, 00:14:23.342 "pending_bdev_io": 0, 00:14:23.342 "completed_nvme_io": 245, 00:14:23.342 "transports": [ 00:14:23.342 { 00:14:23.342 "trtype": "TCP" 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "name": "nvmf_tgt_poll_group_001", 00:14:23.342 "admin_qpairs": 2, 00:14:23.342 "io_qpairs": 84, 00:14:23.342 "current_admin_qpairs": 0, 00:14:23.342 "current_io_qpairs": 0, 00:14:23.342 "pending_bdev_io": 0, 00:14:23.342 "completed_nvme_io": 122, 00:14:23.342 "transports": [ 00:14:23.342 { 00:14:23.342 "trtype": "TCP" 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "name": "nvmf_tgt_poll_group_002", 00:14:23.342 "admin_qpairs": 1, 00:14:23.342 "io_qpairs": 84, 00:14:23.342 "current_admin_qpairs": 0, 00:14:23.342 "current_io_qpairs": 0, 00:14:23.342 "pending_bdev_io": 0, 00:14:23.342 "completed_nvme_io": 184, 00:14:23.342 "transports": [ 00:14:23.342 { 00:14:23.342 "trtype": "TCP" 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }, 00:14:23.342 { 00:14:23.342 "name": "nvmf_tgt_poll_group_003", 00:14:23.342 "admin_qpairs": 2, 00:14:23.342 "io_qpairs": 84, 
00:14:23.342 "current_admin_qpairs": 0, 00:14:23.342 "current_io_qpairs": 0, 00:14:23.342 "pending_bdev_io": 0, 00:14:23.342 "completed_nvme_io": 135, 00:14:23.342 "transports": [ 00:14:23.342 { 00:14:23.342 "trtype": "TCP" 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 } 00:14:23.342 ] 00:14:23.342 }' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:23.342 rmmod nvme_tcp 00:14:23.342 rmmod nvme_fabrics 00:14:23.342 rmmod nvme_keyring 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 783079 ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 783079 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 783079 ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 783079 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 783079 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 783079' 00:14:23.342 killing process with pid 783079 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@969 -- # kill 783079 00:14:23.342 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 783079 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:23.603 08:50:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:26.152 00:14:26.152 real 0m25.679s 00:14:26.152 user 1m22.858s 00:14:26.152 sys 0m4.455s 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.152 ************************************ 00:14:26.152 END TEST nvmf_rpc 00:14:26.152 
************************************ 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:26.152 ************************************ 00:14:26.152 START TEST nvmf_invalid 00:14:26.152 ************************************ 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:26.152 * Looking for test storage... 00:14:26.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lcov --version 00:14:26.152 08:50:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:26.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.152 --rc genhtml_branch_coverage=1 00:14:26.152 --rc genhtml_function_coverage=1 00:14:26.152 --rc genhtml_legend=1 00:14:26.152 --rc geninfo_all_blocks=1 00:14:26.152 --rc geninfo_unexecuted_blocks=1 00:14:26.152 00:14:26.152 ' 
00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:26.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.152 --rc genhtml_branch_coverage=1 00:14:26.152 --rc genhtml_function_coverage=1 00:14:26.152 --rc genhtml_legend=1 00:14:26.152 --rc geninfo_all_blocks=1 00:14:26.152 --rc geninfo_unexecuted_blocks=1 00:14:26.152 00:14:26.152 ' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:26.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.152 --rc genhtml_branch_coverage=1 00:14:26.152 --rc genhtml_function_coverage=1 00:14:26.152 --rc genhtml_legend=1 00:14:26.152 --rc geninfo_all_blocks=1 00:14:26.152 --rc geninfo_unexecuted_blocks=1 00:14:26.152 00:14:26.152 ' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:26.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:26.152 --rc genhtml_branch_coverage=1 00:14:26.152 --rc genhtml_function_coverage=1 00:14:26.152 --rc genhtml_legend=1 00:14:26.152 --rc geninfo_all_blocks=1 00:14:26.152 --rc geninfo_unexecuted_blocks=1 00:14:26.152 00:14:26.152 ' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.152 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.152 
08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.152 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.153 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:26.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:26.153 08:50:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:26.153 08:50:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:28.159 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.159 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:28.159 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:28.159 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:28.159 Found net devices under 0000:09:00.0: cvl_0_0 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.159 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:28.160 Found net devices under 0000:09:00.1: cvl_0_1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.160 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.160 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:28.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:14:28.160 00:14:28.160 --- 10.0.0.2 ping statistics --- 00:14:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.160 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:14:28.160 00:14:28.160 --- 10.0.0.1 ping statistics --- 00:14:28.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.160 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:28.160 08:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=787582 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 787582 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 787582 ']' 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.160 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:28.160 [2024-11-06 08:50:41.407807] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:14:28.160 [2024-11-06 08:50:41.407905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.418 [2024-11-06 08:50:41.483998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.418 [2024-11-06 08:50:41.540765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.418 [2024-11-06 08:50:41.540828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.418 [2024-11-06 08:50:41.540850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.418 [2024-11-06 08:50:41.540862] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.418 [2024-11-06 08:50:41.540872] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:28.418 [2024-11-06 08:50:41.542439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.418 [2024-11-06 08:50:41.542502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.418 [2024-11-06 08:50:41.542525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.418 [2024-11-06 08:50:41.542528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:28.418 08:50:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28110 00:14:28.984 [2024-11-06 08:50:41.996901] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:28.984 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:28.984 { 00:14:28.984 "nqn": "nqn.2016-06.io.spdk:cnode28110", 00:14:28.984 "tgt_name": "foobar", 00:14:28.984 "method": "nvmf_create_subsystem", 00:14:28.984 "req_id": 1 00:14:28.984 } 00:14:28.984 Got JSON-RPC error 
response 00:14:28.984 response: 00:14:28.984 { 00:14:28.984 "code": -32603, 00:14:28.984 "message": "Unable to find target foobar" 00:14:28.984 }' 00:14:28.984 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:28.984 { 00:14:28.984 "nqn": "nqn.2016-06.io.spdk:cnode28110", 00:14:28.984 "tgt_name": "foobar", 00:14:28.984 "method": "nvmf_create_subsystem", 00:14:28.984 "req_id": 1 00:14:28.984 } 00:14:28.984 Got JSON-RPC error response 00:14:28.984 response: 00:14:28.984 { 00:14:28.984 "code": -32603, 00:14:28.984 "message": "Unable to find target foobar" 00:14:28.984 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:28.984 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:28.984 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29314 00:14:29.243 [2024-11-06 08:50:42.273793] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29314: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:29.243 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:29.243 { 00:14:29.243 "nqn": "nqn.2016-06.io.spdk:cnode29314", 00:14:29.243 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:29.243 "method": "nvmf_create_subsystem", 00:14:29.243 "req_id": 1 00:14:29.243 } 00:14:29.243 Got JSON-RPC error response 00:14:29.243 response: 00:14:29.243 { 00:14:29.243 "code": -32602, 00:14:29.243 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:29.243 }' 00:14:29.243 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:29.243 { 00:14:29.243 "nqn": "nqn.2016-06.io.spdk:cnode29314", 00:14:29.243 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:29.243 "method": "nvmf_create_subsystem", 
00:14:29.243 "req_id": 1 00:14:29.243 } 00:14:29.243 Got JSON-RPC error response 00:14:29.243 response: 00:14:29.243 { 00:14:29.243 "code": -32602, 00:14:29.243 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:29.243 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:29.243 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:29.243 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1515 00:14:29.502 [2024-11-06 08:50:42.558751] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1515: invalid model number 'SPDK_Controller' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:29.502 { 00:14:29.502 "nqn": "nqn.2016-06.io.spdk:cnode1515", 00:14:29.502 "model_number": "SPDK_Controller\u001f", 00:14:29.502 "method": "nvmf_create_subsystem", 00:14:29.502 "req_id": 1 00:14:29.502 } 00:14:29.502 Got JSON-RPC error response 00:14:29.502 response: 00:14:29.502 { 00:14:29.502 "code": -32602, 00:14:29.502 "message": "Invalid MN SPDK_Controller\u001f" 00:14:29.502 }' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:29.502 { 00:14:29.502 "nqn": "nqn.2016-06.io.spdk:cnode1515", 00:14:29.502 "model_number": "SPDK_Controller\u001f", 00:14:29.502 "method": "nvmf_create_subsystem", 00:14:29.502 "req_id": 1 00:14:29.502 } 00:14:29.502 Got JSON-RPC error response 00:14:29.502 response: 00:14:29.502 { 00:14:29.502 "code": -32602, 00:14:29.502 "message": "Invalid MN SPDK_Controller\u001f" 00:14:29.502 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:29.502 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:29.502 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.502 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:29.503 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.503 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5:a+MU9OE5>p}.oe@ji"]' 00:14:29.503 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '5:a+MU9OE5>p}.oe@ji"]' nqn.2016-06.io.spdk:cnode14394 00:14:29.761 [2024-11-06 08:50:42.931979] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14394: invalid serial number '5:a+MU9OE5>p}.oe@ji"]' 00:14:29.761 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:29.761 { 00:14:29.761 "nqn": "nqn.2016-06.io.spdk:cnode14394", 00:14:29.761 "serial_number": "5:a+MU9OE5>p}.oe@ji\"]", 00:14:29.761 "method": "nvmf_create_subsystem", 00:14:29.761 "req_id": 1 00:14:29.761 } 00:14:29.761 Got JSON-RPC error response 00:14:29.761 response: 00:14:29.762 { 00:14:29.762 "code": -32602, 00:14:29.762 "message": "Invalid SN 5:a+MU9OE5>p}.oe@ji\"]" 00:14:29.762 }' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:29.762 { 00:14:29.762 "nqn": "nqn.2016-06.io.spdk:cnode14394", 00:14:29.762 "serial_number": "5:a+MU9OE5>p}.oe@ji\"]", 00:14:29.762 "method": "nvmf_create_subsystem", 00:14:29.762 "req_id": 1 00:14:29.762 } 00:14:29.762 Got JSON-RPC error response 00:14:29.762 response: 00:14:29.762 { 00:14:29.762 "code": -32602, 00:14:29.762 "message": "Invalid SN 5:a+MU9OE5>p}.oe@ji\"]" 00:14:29.762 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:29.762 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:14:29.762 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:29.762 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:29.762 08:50:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.762 08:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:29.762 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:29.763 08:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:29.763 08:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:29.763 08:50:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:29.763 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 
00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ V == \- ]] 00:14:30.022 08:50:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'VZ;l*ve<0,]%d,'\''sdrq0PB\h9u8>?.S$g?.S$g?.S$g?.S$g?.S$g?.S$g?.S$g /dev/null' 00:14:32.859 08:50:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:34.764 00:14:34.764 real 0m9.122s 00:14:34.764 user 0m21.882s 00:14:34.764 sys 0m2.495s 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:34.764 ************************************ 00:14:34.764 END TEST nvmf_invalid 00:14:34.764 ************************************ 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.764 08:50:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.022 ************************************ 00:14:35.022 START TEST nvmf_connect_stress 00:14:35.022 ************************************ 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:35.022 * Looking for test storage... 00:14:35.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.022 --rc genhtml_branch_coverage=1 00:14:35.022 --rc genhtml_function_coverage=1 00:14:35.022 --rc genhtml_legend=1 00:14:35.022 --rc 
geninfo_all_blocks=1 00:14:35.022 --rc geninfo_unexecuted_blocks=1 00:14:35.022 00:14:35.022 ' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.022 --rc genhtml_branch_coverage=1 00:14:35.022 --rc genhtml_function_coverage=1 00:14:35.022 --rc genhtml_legend=1 00:14:35.022 --rc geninfo_all_blocks=1 00:14:35.022 --rc geninfo_unexecuted_blocks=1 00:14:35.022 00:14:35.022 ' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.022 --rc genhtml_branch_coverage=1 00:14:35.022 --rc genhtml_function_coverage=1 00:14:35.022 --rc genhtml_legend=1 00:14:35.022 --rc geninfo_all_blocks=1 00:14:35.022 --rc geninfo_unexecuted_blocks=1 00:14:35.022 00:14:35.022 ' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:35.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:35.022 --rc genhtml_branch_coverage=1 00:14:35.022 --rc genhtml_function_coverage=1 00:14:35.022 --rc genhtml_legend=1 00:14:35.022 --rc geninfo_all_blocks=1 00:14:35.022 --rc geninfo_unexecuted_blocks=1 00:14:35.022 00:14:35.022 ' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:35.022 
08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.022 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:35.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 
-- # gather_supported_nvmf_pci_devs 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:35.023 08:50:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:37.552 08:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:37.552 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:37.552 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:37.552 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.553 08:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:37.553 Found net devices under 0000:09:00.0: cvl_0_0 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:37.553 Found net devices under 0000:09:00.1: cvl_0_1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:37.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:14:37.553 00:14:37.553 --- 10.0.0.2 ping statistics --- 00:14:37.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.553 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:14:37.553 00:14:37.553 --- 10.0.0.1 ping statistics --- 00:14:37.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.553 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=790342 00:14:37.553 08:50:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 790342 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 790342 ']' 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.553 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.553 [2024-11-06 08:50:50.615302] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:14:37.553 [2024-11-06 08:50:50.615382] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.553 [2024-11-06 08:50:50.687517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:37.553 [2024-11-06 08:50:50.744738] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:37.553 [2024-11-06 08:50:50.744794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:37.553 [2024-11-06 08:50:50.744807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:37.553 [2024-11-06 08:50:50.744818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:37.553 [2024-11-06 08:50:50.744828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:37.553 [2024-11-06 08:50:50.746269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.553 [2024-11-06 08:50:50.746333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:37.553 [2024-11-06 08:50:50.746336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:37.811 [2024-11-06 08:50:50.900468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.811 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 [2024-11-06 08:50:50.917643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.812 NULL1 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=790367 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.812 08:50:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.069 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.069 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:38.069 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.069 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.069 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.634 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.634 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:38.634 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.634 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.634 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.891 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.891 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:38.891 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.891 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.891 08:50:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.149 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.149 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:39.149 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.149 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.149 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.406 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.406 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:39.406 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.407 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.407 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.665 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.665 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:39.665 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.665 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.665 08:50:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.229 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.229 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:40.229 08:50:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.229 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.229 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.486 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.486 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:40.486 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.486 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.486 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.744 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:40.744 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.744 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.744 08:50:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.002 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.002 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:41.002 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.002 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.002 08:50:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.259 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.259 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:41.259 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.259 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.259 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.824 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.824 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:41.824 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.824 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.824 08:50:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.082 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.082 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:42.082 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.082 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.082 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.339 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.339 08:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:42.339 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.339 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.339 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.597 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.597 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:42.597 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.597 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.597 08:50:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.854 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.854 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:42.854 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.854 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.854 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.419 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.419 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:43.419 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.419 08:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.419 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.676 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.676 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:43.676 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.676 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.676 08:50:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.933 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.933 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:43.933 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.933 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.933 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.191 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.191 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:44.191 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.191 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.191 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.448 08:50:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.448 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:44.448 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.448 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.448 08:50:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.013 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.013 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:45.013 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.013 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.013 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.270 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.270 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:45.270 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.270 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.270 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.527 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.527 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:45.527 
08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.527 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.527 08:50:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.785 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.785 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:45.785 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.785 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.785 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.043 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.043 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:46.043 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.043 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.043 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.607 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.607 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:46.607 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.607 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.607 
08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.865 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.865 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:46.865 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.865 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.865 08:50:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.123 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.123 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:47.123 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.123 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.123 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.380 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.380 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:47.380 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.380 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.380 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.945 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.945 
08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:47.945 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:47.945 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.945 08:51:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.945 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 790367 00:14:48.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (790367) - No such process 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 790367 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:48.203 rmmod nvme_tcp 00:14:48.203 rmmod nvme_fabrics 00:14:48.203 rmmod nvme_keyring 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:48.203 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 790342 ']' 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 790342 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 790342 ']' 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 790342 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 790342 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 790342' 00:14:48.204 killing process with pid 790342 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- 
# kill 790342 00:14:48.204 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 790342 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.461 08:51:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.365 00:14:50.365 real 0m15.529s 00:14:50.365 user 0m38.388s 00:14:50.365 sys 0m6.051s 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.365 ************************************ 00:14:50.365 END TEST nvmf_connect_stress 00:14:50.365 ************************************ 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.365 08:51:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.627 ************************************ 00:14:50.627 START TEST nvmf_fused_ordering 00:14:50.627 ************************************ 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:50.627 * Looking for test storage... 
00:14:50.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lcov --version 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:50.627 08:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.627 08:51:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:50.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.627 --rc genhtml_branch_coverage=1 00:14:50.627 --rc genhtml_function_coverage=1 00:14:50.627 --rc genhtml_legend=1 00:14:50.627 --rc geninfo_all_blocks=1 00:14:50.627 --rc geninfo_unexecuted_blocks=1 00:14:50.627 00:14:50.627 ' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:50.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.627 --rc genhtml_branch_coverage=1 00:14:50.627 --rc genhtml_function_coverage=1 00:14:50.627 --rc genhtml_legend=1 00:14:50.627 --rc geninfo_all_blocks=1 00:14:50.627 --rc geninfo_unexecuted_blocks=1 00:14:50.627 00:14:50.627 ' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:50.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.627 --rc genhtml_branch_coverage=1 00:14:50.627 --rc genhtml_function_coverage=1 00:14:50.627 --rc genhtml_legend=1 00:14:50.627 --rc geninfo_all_blocks=1 00:14:50.627 --rc geninfo_unexecuted_blocks=1 00:14:50.627 00:14:50.627 ' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:50.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.627 --rc genhtml_branch_coverage=1 00:14:50.627 --rc genhtml_function_coverage=1 00:14:50.627 --rc genhtml_legend=1 00:14:50.627 --rc geninfo_all_blocks=1 00:14:50.627 --rc geninfo_unexecuted_blocks=1 00:14:50.627 00:14:50.627 ' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.627 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.628 08:51:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.207 08:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:53.207 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.207 08:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:53.207 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.207 08:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.207 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:53.207 Found net devices under 0000:09:00.0: cvl_0_0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:53.208 Found net devices under 0000:09:00.1: cvl_0_1 
00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:53.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:53.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:14:53.208 00:14:53.208 --- 10.0.0.2 ping statistics --- 00:14:53.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.208 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:14:53.208 00:14:53.208 --- 10.0.0.1 ping statistics --- 00:14:53.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.208 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:53.208 08:51:06 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=793635 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 793635 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 793635 ']' 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.208 [2024-11-06 08:51:06.248027] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:14:53.208 [2024-11-06 08:51:06.248104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.208 [2024-11-06 08:51:06.321024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.208 [2024-11-06 08:51:06.379059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.208 [2024-11-06 08:51:06.379111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.208 [2024-11-06 08:51:06.379125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.208 [2024-11-06 08:51:06.379136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.208 [2024-11-06 08:51:06.379153] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:53.208 [2024-11-06 08:51:06.379736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:53.208 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 [2024-11-06 08:51:06.522751] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 [2024-11-06 08:51:06.538983] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 NULL1 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.466 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:53.467 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.467 08:51:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:53.467 [2024-11-06 08:51:06.582779] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:14:53.467 [2024-11-06 08:51:06.582813] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793782 ] 00:14:53.724 Attached to nqn.2016-06.io.spdk:cnode1 00:14:53.724 Namespace ID: 1 size: 1GB 00:14:53.724 fused_ordering(0) 00:14:53.724 fused_ordering(1) 00:14:53.724 fused_ordering(2) 00:14:53.724 fused_ordering(3) 00:14:53.724 fused_ordering(4) 00:14:53.724 fused_ordering(5) 00:14:53.724 fused_ordering(6) 00:14:53.724 fused_ordering(7) 00:14:53.724 fused_ordering(8) 00:14:53.724 fused_ordering(9) 00:14:53.724 fused_ordering(10) 00:14:53.724 fused_ordering(11) 00:14:53.724 fused_ordering(12) 00:14:53.724 fused_ordering(13) 00:14:53.724 fused_ordering(14) 00:14:53.724 fused_ordering(15) 00:14:53.724 fused_ordering(16) 00:14:53.724 fused_ordering(17) 00:14:53.724 fused_ordering(18) 00:14:53.724 fused_ordering(19) 00:14:53.724 fused_ordering(20) 00:14:53.724 fused_ordering(21) 00:14:53.724 fused_ordering(22) 00:14:53.724 fused_ordering(23) 00:14:53.724 fused_ordering(24) 00:14:53.724 fused_ordering(25) 00:14:53.724 fused_ordering(26) 00:14:53.724 fused_ordering(27) 00:14:53.724 
fused_ordering(28) 00:14:53.724 … fused_ordering(997) 00:14:55.680 [per-iteration counter lines for iterations 28–997, logged between 00:14:53.724 and 00:14:55.680, condensed]
00:14:55.680 fused_ordering(998) 00:14:55.680 fused_ordering(999) 00:14:55.680 fused_ordering(1000) 00:14:55.680 fused_ordering(1001) 00:14:55.680 fused_ordering(1002) 00:14:55.680 fused_ordering(1003) 00:14:55.680 fused_ordering(1004) 00:14:55.680 fused_ordering(1005) 00:14:55.680 fused_ordering(1006) 00:14:55.680 fused_ordering(1007) 00:14:55.680 fused_ordering(1008) 00:14:55.680 fused_ordering(1009) 00:14:55.680 fused_ordering(1010) 00:14:55.680 fused_ordering(1011) 00:14:55.680 fused_ordering(1012) 00:14:55.680 fused_ordering(1013) 00:14:55.680 fused_ordering(1014) 00:14:55.680 fused_ordering(1015) 00:14:55.680 fused_ordering(1016) 00:14:55.680 fused_ordering(1017) 00:14:55.680 fused_ordering(1018) 00:14:55.680 fused_ordering(1019) 00:14:55.680 fused_ordering(1020) 00:14:55.680 fused_ordering(1021) 00:14:55.680 fused_ordering(1022) 00:14:55.680 fused_ordering(1023) 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.680 rmmod nvme_tcp 00:14:55.680 rmmod nvme_fabrics 00:14:55.680 rmmod nvme_keyring 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 793635 ']' 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 793635 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 793635 ']' 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 793635 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793635 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793635' 00:14:55.680 killing process with pid 793635 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 793635 00:14:55.680 08:51:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 793635 00:14:55.938 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:55.938 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 
00:14:55.938 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:55.938 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.939 08:51:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:58.474 00:14:58.474 real 0m7.536s 00:14:58.474 user 0m4.987s 00:14:58.474 sys 0m3.180s 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:58.474 ************************************ 00:14:58.474 END TEST nvmf_fused_ordering 00:14:58.474 ************************************ 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:58.474 08:51:11 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:58.474 ************************************ 00:14:58.474 START TEST nvmf_ns_masking 00:14:58.474 ************************************ 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:58.474 * Looking for test storage... 00:14:58.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lcov --version 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.474 08:51:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:14:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.474 --rc genhtml_branch_coverage=1 00:14:58.474 --rc genhtml_function_coverage=1 00:14:58.474 --rc genhtml_legend=1 00:14:58.474 --rc geninfo_all_blocks=1 00:14:58.474 --rc geninfo_unexecuted_blocks=1 00:14:58.474 00:14:58.474 ' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:14:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.474 --rc genhtml_branch_coverage=1 00:14:58.474 --rc genhtml_function_coverage=1 00:14:58.474 --rc genhtml_legend=1 00:14:58.474 --rc geninfo_all_blocks=1 00:14:58.474 --rc geninfo_unexecuted_blocks=1 00:14:58.474 00:14:58.474 ' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:14:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.474 --rc genhtml_branch_coverage=1 00:14:58.474 --rc genhtml_function_coverage=1 00:14:58.474 --rc genhtml_legend=1 00:14:58.474 --rc geninfo_all_blocks=1 00:14:58.474 --rc geninfo_unexecuted_blocks=1 00:14:58.474 00:14:58.474 ' 00:14:58.474 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:14:58.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.474 --rc genhtml_branch_coverage=1 00:14:58.474 --rc 
genhtml_function_coverage=1 00:14:58.475 --rc genhtml_legend=1 00:14:58.475 --rc geninfo_all_blocks=1 00:14:58.475 --rc geninfo_unexecuted_blocks=1 00:14:58.475 00:14:58.475 ' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:58.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d79bd854-9773-4870-bd00-28927cb7c7f5 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=327f8fbe-3641-438c-8f21-3a7338a645b0 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dab38901-052f-47af-8ce2-46ca288f33e8 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:58.475 08:51:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.375 08:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.375 08:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:00.375 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:00.375 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:15:00.375 Found net devices under 0000:09:00.0: cvl_0_0 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:00.375 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:00.376 Found net devices under 0000:09:00.1: cvl_0_1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:15:00.376 00:15:00.376 --- 10.0.0.2 ping statistics --- 00:15:00.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.376 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:15:00.376 00:15:00.376 --- 10.0.0.1 ping statistics --- 00:15:00.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.376 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=796496 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 796496 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 796496 ']' 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.376 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.634 [2024-11-06 08:51:13.670082] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:15:00.634 [2024-11-06 08:51:13.670193] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.634 [2024-11-06 08:51:13.741852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.634 [2024-11-06 08:51:13.797886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.634 [2024-11-06 08:51:13.797943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:00.634 [2024-11-06 08:51:13.797957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.634 [2024-11-06 08:51:13.797967] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.634 [2024-11-06 08:51:13.797977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.634 [2024-11-06 08:51:13.798548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.634 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.634 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:00.635 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:00.635 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:00.635 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.892 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.892 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:01.150 [2024-11-06 08:51:14.248036] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.150 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:01.150 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:01.150 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:01.408 Malloc1 00:15:01.408 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.665 Malloc2 00:15:01.665 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.922 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:02.180 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.437 [2024-11-06 08:51:15.716955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dab38901-052f-47af-8ce2-46ca288f33e8 -a 10.0.0.2 -s 4420 -i 4 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.695 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:02.695 08:51:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:04.592 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:04.592 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:04.592 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:04.850 [ 0]:0x1 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:04.850 
08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a83f915b4bea41e6b3d5c4d5d8efcd19 00:15:04.850 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a83f915b4bea41e6b3d5c4d5d8efcd19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.851 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:05.108 [ 0]:0x1 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a83f915b4bea41e6b3d5c4d5d8efcd19 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a83f915b4bea41e6b3d5c4d5d8efcd19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:05.108 [ 1]:0x2 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:05.108 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.366 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.623 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dab38901-052f-47af-8ce2-46ca288f33e8 -a 10.0.0.2 -s 4420 -i 4 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.188 08:51:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:06.188 08:51:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.713 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.714 [ 0]:0x2 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.714 [ 0]:0x1 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a83f915b4bea41e6b3d5c4d5d8efcd19 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a83f915b4bea41e6b3d5c4d5d8efcd19 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.714 [ 1]:0x2 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.714 08:51:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:09.279 [ 0]:0x2 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:09.279 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:09.280 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:09.280 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:09.280 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:09.280 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:09.280 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.280 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:09.538 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:09.538 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dab38901-052f-47af-8ce2-46ca288f33e8 -a 10.0.0.2 -s 4420 -i 4 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:09.796 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.693 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.951 [ 0]:0x1 00:15:11.951 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.951 08:51:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a83f915b4bea41e6b3d5c4d5d8efcd19 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a83f915b4bea41e6b3d5c4d5d8efcd19 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.951 [ 1]:0x2 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.951 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:12.209 
08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.209 [ 0]:0x2 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.209 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.466 08:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.466 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:12.467 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:12.467 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:12.724 [2024-11-06 08:51:25.807431] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:12.724 request: 00:15:12.724 { 00:15:12.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:12.724 "nsid": 2, 00:15:12.724 "host": "nqn.2016-06.io.spdk:host1", 00:15:12.724 "method": "nvmf_ns_remove_host", 00:15:12.724 "req_id": 1 00:15:12.724 } 00:15:12.724 Got JSON-RPC error response 00:15:12.724 response: 00:15:12.724 { 00:15:12.724 "code": -32602, 00:15:12.724 "message": "Invalid parameters" 00:15:12.724 } 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:12.724 08:51:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:12.724 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.725 [ 0]:0x2 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=21c64c5aba6947aa9260c8d678136b59 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 21c64c5aba6947aa9260c8d678136b59 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:12.725 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=798116 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 798116 /var/tmp/host.sock 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 798116 ']' 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:12.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.982 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.982 [2024-11-06 08:51:26.166352] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:15:12.983 [2024-11-06 08:51:26.166446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798116 ] 00:15:12.983 [2024-11-06 08:51:26.232357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.239 [2024-11-06 08:51:26.290211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.496 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.496 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:13.496 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:13.753 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:14.010 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d79bd854-9773-4870-bd00-28927cb7c7f5 00:15:14.010 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:14.010 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D79BD85497734870BD0028927CB7C7F5 -i 00:15:14.267 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 327f8fbe-3641-438c-8f21-3a7338a645b0 00:15:14.267 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:14.268 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 327F8FBE3641438C8F213A7338A645B0 -i 00:15:14.525 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:14.783 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:15.039 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:15.039 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:15.604 nvme0n1 00:15:15.604 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:15.604 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:16.170 nvme1n2 00:15:16.170 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:16.170 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:16.170 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:16.170 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:16.170 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:16.427 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:16.427 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:16.427 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:16.427 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:16.685 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d79bd854-9773-4870-bd00-28927cb7c7f5 == \d\7\9\b\d\8\5\4\-\9\7\7\3\-\4\8\7\0\-\b\d\0\0\-\2\8\9\2\7\c\b\7\c\7\f\5 ]] 00:15:16.685 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:16.685 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:16.685 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:16.942 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 327f8fbe-3641-438c-8f21-3a7338a645b0 == \3\2\7\f\8\f\b\e\-\3\6\4\1\-\4\3\8\c\-\8\f\2\1\-\3\a\7\3\3\8\a\6\4\5\b\0 ]] 00:15:16.942 08:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:17.200 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:17.457 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d79bd854-9773-4870-bd00-28927cb7c7f5 00:15:17.457 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D79BD85497734870BD0028927CB7C7F5 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D79BD85497734870BD0028927CB7C7F5 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:17.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D79BD85497734870BD0028927CB7C7F5 00:15:17.715 [2024-11-06 08:51:30.869999] bdev.c:8607:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:17.715 [2024-11-06 08:51:30.870038] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:17.715 [2024-11-06 08:51:30.870056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.715 request: 00:15:17.715 { 00:15:17.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.715 "namespace": { 00:15:17.716 "bdev_name": "invalid", 00:15:17.716 "nsid": 1, 00:15:17.716 "nguid": "D79BD85497734870BD0028927CB7C7F5", 00:15:17.716 "no_auto_visible": false, 00:15:17.716 "no_metadata": false 00:15:17.716 }, 00:15:17.716 "method": "nvmf_subsystem_add_ns", 00:15:17.716 "req_id": 1 00:15:17.716 } 00:15:17.716 Got JSON-RPC error response 00:15:17.716 response: 00:15:17.716 { 00:15:17.716 "code": -32602, 00:15:17.716 "message": "Invalid parameters" 00:15:17.716 } 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:17.716 08:51:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d79bd854-9773-4870-bd00-28927cb7c7f5 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:17.716 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D79BD85497734870BD0028927CB7C7F5 -i 00:15:17.973 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 798116 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 798116 ']' 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 798116 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:20.499 08:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 798116 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 798116' 00:15:20.499 killing process with pid 798116 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 798116 00:15:20.499 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 798116 00:15:20.757 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:21.014 rmmod nvme_tcp 00:15:21.014 rmmod nvme_fabrics 00:15:21.014 rmmod nvme_keyring 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:21.014 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 796496 ']' 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 796496 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 796496 ']' 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 796496 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 796496 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 796496' 00:15:21.015 killing process with pid 796496 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 796496 00:15:21.015 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 796496 00:15:21.272 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 
-- # '[' '' == iso ']' 00:15:21.272 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:21.272 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:21.272 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.273 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:23.808 00:15:23.808 real 0m25.354s 00:15:23.808 user 0m37.046s 00:15:23.808 sys 0m4.568s 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:23.808 ************************************ 00:15:23.808 END TEST nvmf_ns_masking 00:15:23.808 ************************************ 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:23.808 
08:51:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.808 ************************************ 00:15:23.808 START TEST nvmf_nvme_cli 00:15:23.808 ************************************ 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:23.808 * Looking for test storage... 00:15:23.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lcov --version 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.808 
08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:23.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.808 --rc genhtml_branch_coverage=1 00:15:23.808 --rc genhtml_function_coverage=1 00:15:23.808 --rc genhtml_legend=1 00:15:23.808 --rc geninfo_all_blocks=1 00:15:23.808 --rc geninfo_unexecuted_blocks=1 00:15:23.808 
00:15:23.808 ' 00:15:23.808 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:23.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:23.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:23.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.809 --rc genhtml_branch_coverage=1 00:15:23.809 --rc genhtml_function_coverage=1 00:15:23.809 --rc genhtml_legend=1 00:15:23.809 --rc geninfo_all_blocks=1 00:15:23.809 --rc geninfo_unexecuted_blocks=1 00:15:23.809 00:15:23.809 ' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.809 08:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:23.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:23.809 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:25.710 08:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.710 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:25.710 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:25.711 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.711 08:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:25.711 Found net devices under 0000:09:00.0: cvl_0_0 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:25.711 Found net devices under 0000:09:00.1: cvl_0_1 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.711 08:51:38 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.711 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:25.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:15:25.970 00:15:25.970 --- 10.0.0.2 ping statistics --- 00:15:25.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.970 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:15:25.970 00:15:25.970 --- 10.0.0.1 ping statistics --- 00:15:25.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.970 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:25.970 08:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=801072 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 801072 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.970 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 801072 ']' 00:15:25.971 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.971 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.971 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.971 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.971 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.971 [2024-11-06 08:51:39.180595] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:15:25.971 [2024-11-06 08:51:39.180684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.971 [2024-11-06 08:51:39.256852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.229 [2024-11-06 08:51:39.317454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.229 [2024-11-06 08:51:39.317508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.229 [2024-11-06 08:51:39.317522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.229 [2024-11-06 08:51:39.317534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.229 [2024-11-06 08:51:39.317543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.229 [2024-11-06 08:51:39.319051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.229 [2024-11-06 08:51:39.319110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.229 [2024-11-06 08:51:39.319176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.229 [2024-11-06 08:51:39.319179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.229 [2024-11-06 08:51:39.479347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:26.229 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 Malloc0 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 Malloc1 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 [2024-11-06 08:51:39.586544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:15:26.487 00:15:26.487 Discovery Log Number of Records 2, Generation counter 2 00:15:26.487 =====Discovery Log Entry 0====== 00:15:26.487 trtype: tcp 00:15:26.487 adrfam: ipv4 00:15:26.487 subtype: current discovery subsystem 00:15:26.487 treq: not required 00:15:26.487 portid: 0 00:15:26.487 trsvcid: 4420 
00:15:26.487 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:26.487 traddr: 10.0.0.2 00:15:26.487 eflags: explicit discovery connections, duplicate discovery information 00:15:26.487 sectype: none 00:15:26.487 =====Discovery Log Entry 1====== 00:15:26.487 trtype: tcp 00:15:26.487 adrfam: ipv4 00:15:26.487 subtype: nvme subsystem 00:15:26.487 treq: not required 00:15:26.487 portid: 0 00:15:26.487 trsvcid: 4420 00:15:26.487 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:26.487 traddr: 10.0.0.2 00:15:26.487 eflags: none 00:15:26.487 sectype: none 00:15:26.487 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:26.745 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:27.311 08:51:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:27.311 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:27.311 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.311 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:27.311 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:27.311 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:29.275 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:29.276 
08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:29.276 /dev/nvme0n2 ]] 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.276 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:29.533 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:29.533 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.533 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:29.533 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:29.534 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:29.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:29.792 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:29.792 rmmod nvme_tcp 00:15:29.792 rmmod nvme_fabrics 00:15:29.792 rmmod nvme_keyring 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 801072 ']' 
00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 801072 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 801072 ']' 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 801072 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 801072 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 801072' 00:15:29.792 killing process with pid 801072 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 801072 00:15:29.792 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 801072 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 
00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.357 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:32.266 00:15:32.266 real 0m8.748s 00:15:32.266 user 0m16.628s 00:15:32.266 sys 0m2.427s 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:32.266 ************************************ 00:15:32.266 END TEST nvmf_nvme_cli 00:15:32.266 ************************************ 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.266 ************************************ 00:15:32.266 START TEST 
nvmf_vfio_user 00:15:32.266 ************************************ 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:32.266 * Looking for test storage... 00:15:32.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lcov --version 00:15:32.266 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.565 08:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:32.565 08:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:15:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.565 --rc genhtml_branch_coverage=1 00:15:32.565 --rc genhtml_function_coverage=1 00:15:32.565 --rc genhtml_legend=1 00:15:32.565 --rc geninfo_all_blocks=1 00:15:32.565 --rc geninfo_unexecuted_blocks=1 00:15:32.565 00:15:32.565 ' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:15:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.565 --rc genhtml_branch_coverage=1 00:15:32.565 --rc genhtml_function_coverage=1 00:15:32.565 --rc genhtml_legend=1 00:15:32.565 --rc geninfo_all_blocks=1 00:15:32.565 --rc geninfo_unexecuted_blocks=1 00:15:32.565 00:15:32.565 ' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:15:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.565 --rc genhtml_branch_coverage=1 00:15:32.565 --rc genhtml_function_coverage=1 00:15:32.565 --rc genhtml_legend=1 00:15:32.565 --rc geninfo_all_blocks=1 00:15:32.565 --rc geninfo_unexecuted_blocks=1 00:15:32.565 00:15:32.565 ' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:15:32.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.565 --rc genhtml_branch_coverage=1 00:15:32.565 --rc genhtml_function_coverage=1 00:15:32.565 --rc genhtml_legend=1 00:15:32.565 --rc geninfo_all_blocks=1 00:15:32.565 --rc geninfo_unexecuted_blocks=1 00:15:32.565 00:15:32.565 ' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.565 
08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.565 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:32.565 08:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=801973 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 801973' 00:15:32.566 Process pid: 801973 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 801973 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 
801973 ']' 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.566 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:32.566 [2024-11-06 08:51:45.653447] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:15:32.566 [2024-11-06 08:51:45.653540] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.566 [2024-11-06 08:51:45.721240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.566 [2024-11-06 08:51:45.777508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.566 [2024-11-06 08:51:45.777563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.566 [2024-11-06 08:51:45.777576] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.566 [2024-11-06 08:51:45.777587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.566 [2024-11-06 08:51:45.777597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.566 [2024-11-06 08:51:45.781852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.566 [2024-11-06 08:51:45.781922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.566 [2024-11-06 08:51:45.781990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.566 [2024-11-06 08:51:45.781993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.824 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.824 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:32.824 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:33.756 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:34.013 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:34.013 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:34.013 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.013 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:34.013 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:34.579 Malloc1 00:15:34.579 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:34.837 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:35.095 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:35.352 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.353 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:35.353 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:35.610 Malloc2 00:15:35.610 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:35.868 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:36.125 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:36.383 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:36.643 [2024-11-06 08:51:49.678860] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:15:36.643 [2024-11-06 08:51:49.678919] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802513 ] 00:15:36.643 [2024-11-06 08:51:49.728014] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:36.643 [2024-11-06 08:51:49.737328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.643 [2024-11-06 08:51:49.737357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdfc3897000 00:15:36.643 [2024-11-06 08:51:49.738320] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.739319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.740326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.741330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.742337] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.743340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.744348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.745355] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:36.643 [2024-11-06 08:51:49.746360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:36.643 [2024-11-06 08:51:49.746391] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdfc388c000 00:15:36.643 [2024-11-06 08:51:49.747524] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.643 [2024-11-06 08:51:49.763180] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:36.643 [2024-11-06 08:51:49.763224] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:36.643 [2024-11-06 08:51:49.765461] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:36.643 [2024-11-06 08:51:49.765511] 
nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:36.643 [2024-11-06 08:51:49.765597] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:36.643 [2024-11-06 08:51:49.765626] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:36.643 [2024-11-06 08:51:49.765637] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:36.643 [2024-11-06 08:51:49.766456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:36.643 [2024-11-06 08:51:49.766477] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:36.643 [2024-11-06 08:51:49.766489] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:36.643 [2024-11-06 08:51:49.767462] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:36.643 [2024-11-06 08:51:49.767483] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:36.643 [2024-11-06 08:51:49.767496] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.768471] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:36.643 [2024-11-06 08:51:49.768489] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.769472] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:36.643 [2024-11-06 08:51:49.769490] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:36.643 [2024-11-06 08:51:49.769500] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.769511] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.769620] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:36.643 [2024-11-06 08:51:49.769628] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.769636] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:36.643 [2024-11-06 08:51:49.773842] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:36.643 [2024-11-06 08:51:49.774508] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:36.643 [2024-11-06 08:51:49.775513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:36.643 [2024-11-06 08:51:49.776508] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:36.643 [2024-11-06 08:51:49.776649] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:36.643 [2024-11-06 08:51:49.777526] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:36.643 [2024-11-06 08:51:49.777544] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:36.643 [2024-11-06 08:51:49.777553] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:36.643 [2024-11-06 08:51:49.777576] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:36.643 [2024-11-06 08:51:49.777598] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:36.643 [2024-11-06 08:51:49.777622] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.643 [2024-11-06 08:51:49.777632] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.643 [2024-11-06 08:51:49.777638] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.643 [2024-11-06 08:51:49.777656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.643 [2024-11-06 08:51:49.777725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.777741] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:36.644 [2024-11-06 08:51:49.777749] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:36.644 [2024-11-06 08:51:49.777756] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:36.644 [2024-11-06 08:51:49.777763] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:36.644 [2024-11-06 08:51:49.777770] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:36.644 [2024-11-06 08:51:49.777778] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:36.644 [2024-11-06 08:51:49.777785] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.777797] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.777812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.777826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.777872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.644 [2024-11-06 08:51:49.777890] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.644 [2024-11-06 08:51:49.777903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.644 [2024-11-06 08:51:49.777915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:36.644 [2024-11-06 08:51:49.777924] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.777935] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.777949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.777961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.777975] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:36.644 [2024-11-06 08:51:49.777985] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.777996] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778006] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778019] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778100] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778117] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778130] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:36.644 [2024-11-06 08:51:49.778153] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:36.644 [2024-11-06 08:51:49.778160] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.644 [2024-11-06 08:51:49.778169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778202] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:36.644 [2024-11-06 08:51:49.778218] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778232] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778243] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.644 [2024-11-06 08:51:49.778251] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.644 [2024-11-06 08:51:49.778260] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.644 [2024-11-06 08:51:49.778270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778314] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778328] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778340] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:36.644 [2024-11-06 08:51:49.778348] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.644 [2024-11-06 08:51:49.778353] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.644 [2024-11-06 08:51:49.778363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:15:36.644 [2024-11-06 08:51:49.778387] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778398] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778412] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778423] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778431] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778439] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778448] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:36.644 [2024-11-06 08:51:49.778455] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:36.644 [2024-11-06 08:51:49.778464] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:36.644 [2024-11-06 08:51:49.778489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:36.644 [2024-11-06 08:51:49.778623] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:36.644 [2024-11-06 08:51:49.778633] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:36.644 [2024-11-06 08:51:49.778639] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:36.644 [2024-11-06 08:51:49.778644] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:36.644 [2024-11-06 08:51:49.778650] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:36.644 [2024-11-06 08:51:49.778659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:36.644 [2024-11-06 08:51:49.778671] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:36.644 [2024-11-06 08:51:49.778678] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:36.644 [2024-11-06 08:51:49.778684] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.644 [2024-11-06 08:51:49.778693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:36.644 [2024-11-06 08:51:49.778704] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:36.644 [2024-11-06 08:51:49.778711] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:36.644 [2024-11-06 08:51:49.778717] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.645 [2024-11-06 08:51:49.778725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:36.645 [2024-11-06 08:51:49.778741] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:36.645 [2024-11-06 08:51:49.778750] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:36.645 [2024-11-06 08:51:49.778756] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:36.645 [2024-11-06 08:51:49.778765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:36.645 [2024-11-06 08:51:49.778776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:36.645 [2024-11-06 
08:51:49.778798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:36.645 [2024-11-06 08:51:49.778816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:36.645 [2024-11-06 08:51:49.778854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:36.645 ===================================================== 00:15:36.645 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.645 ===================================================== 00:15:36.645 Controller Capabilities/Features 00:15:36.645 ================================ 00:15:36.645 Vendor ID: 4e58 00:15:36.645 Subsystem Vendor ID: 4e58 00:15:36.645 Serial Number: SPDK1 00:15:36.645 Model Number: SPDK bdev Controller 00:15:36.645 Firmware Version: 25.01 00:15:36.645 Recommended Arb Burst: 6 00:15:36.645 IEEE OUI Identifier: 8d 6b 50 00:15:36.645 Multi-path I/O 00:15:36.645 May have multiple subsystem ports: Yes 00:15:36.645 May have multiple controllers: Yes 00:15:36.645 Associated with SR-IOV VF: No 00:15:36.645 Max Data Transfer Size: 131072 00:15:36.645 Max Number of Namespaces: 32 00:15:36.645 Max Number of I/O Queues: 127 00:15:36.645 NVMe Specification Version (VS): 1.3 00:15:36.645 NVMe Specification Version (Identify): 1.3 00:15:36.645 Maximum Queue Entries: 256 00:15:36.645 Contiguous Queues Required: Yes 00:15:36.645 Arbitration Mechanisms Supported 00:15:36.645 Weighted Round Robin: Not Supported 00:15:36.645 Vendor Specific: Not Supported 00:15:36.645 Reset Timeout: 15000 ms 00:15:36.645 Doorbell Stride: 4 bytes 00:15:36.645 NVM Subsystem Reset: Not Supported 00:15:36.645 Command Sets Supported 00:15:36.645 NVM Command Set: Supported 00:15:36.645 Boot Partition: Not Supported 00:15:36.645 Memory Page Size Minimum: 4096 bytes 00:15:36.645 
Memory Page Size Maximum: 4096 bytes 00:15:36.645 Persistent Memory Region: Not Supported 00:15:36.645 Optional Asynchronous Events Supported 00:15:36.645 Namespace Attribute Notices: Supported 00:15:36.645 Firmware Activation Notices: Not Supported 00:15:36.645 ANA Change Notices: Not Supported 00:15:36.645 PLE Aggregate Log Change Notices: Not Supported 00:15:36.645 LBA Status Info Alert Notices: Not Supported 00:15:36.645 EGE Aggregate Log Change Notices: Not Supported 00:15:36.645 Normal NVM Subsystem Shutdown event: Not Supported 00:15:36.645 Zone Descriptor Change Notices: Not Supported 00:15:36.645 Discovery Log Change Notices: Not Supported 00:15:36.645 Controller Attributes 00:15:36.645 128-bit Host Identifier: Supported 00:15:36.645 Non-Operational Permissive Mode: Not Supported 00:15:36.645 NVM Sets: Not Supported 00:15:36.645 Read Recovery Levels: Not Supported 00:15:36.645 Endurance Groups: Not Supported 00:15:36.645 Predictable Latency Mode: Not Supported 00:15:36.645 Traffic Based Keep ALive: Not Supported 00:15:36.645 Namespace Granularity: Not Supported 00:15:36.645 SQ Associations: Not Supported 00:15:36.645 UUID List: Not Supported 00:15:36.645 Multi-Domain Subsystem: Not Supported 00:15:36.645 Fixed Capacity Management: Not Supported 00:15:36.645 Variable Capacity Management: Not Supported 00:15:36.645 Delete Endurance Group: Not Supported 00:15:36.645 Delete NVM Set: Not Supported 00:15:36.645 Extended LBA Formats Supported: Not Supported 00:15:36.645 Flexible Data Placement Supported: Not Supported 00:15:36.645 00:15:36.645 Controller Memory Buffer Support 00:15:36.645 ================================ 00:15:36.645 Supported: No 00:15:36.645 00:15:36.645 Persistent Memory Region Support 00:15:36.645 ================================ 00:15:36.645 Supported: No 00:15:36.645 00:15:36.645 Admin Command Set Attributes 00:15:36.645 ============================ 00:15:36.645 Security Send/Receive: Not Supported 00:15:36.645 Format NVM: Not Supported 
00:15:36.645 Firmware Activate/Download: Not Supported 00:15:36.645 Namespace Management: Not Supported 00:15:36.645 Device Self-Test: Not Supported 00:15:36.645 Directives: Not Supported 00:15:36.645 NVMe-MI: Not Supported 00:15:36.645 Virtualization Management: Not Supported 00:15:36.645 Doorbell Buffer Config: Not Supported 00:15:36.645 Get LBA Status Capability: Not Supported 00:15:36.645 Command & Feature Lockdown Capability: Not Supported 00:15:36.645 Abort Command Limit: 4 00:15:36.645 Async Event Request Limit: 4 00:15:36.645 Number of Firmware Slots: N/A 00:15:36.645 Firmware Slot 1 Read-Only: N/A 00:15:36.645 Firmware Activation Without Reset: N/A 00:15:36.645 Multiple Update Detection Support: N/A 00:15:36.645 Firmware Update Granularity: No Information Provided 00:15:36.645 Per-Namespace SMART Log: No 00:15:36.645 Asymmetric Namespace Access Log Page: Not Supported 00:15:36.645 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:36.645 Command Effects Log Page: Supported 00:15:36.645 Get Log Page Extended Data: Supported 00:15:36.645 Telemetry Log Pages: Not Supported 00:15:36.645 Persistent Event Log Pages: Not Supported 00:15:36.645 Supported Log Pages Log Page: May Support 00:15:36.645 Commands Supported & Effects Log Page: Not Supported 00:15:36.645 Feature Identifiers & Effects Log Page:May Support 00:15:36.645 NVMe-MI Commands & Effects Log Page: May Support 00:15:36.645 Data Area 4 for Telemetry Log: Not Supported 00:15:36.645 Error Log Page Entries Supported: 128 00:15:36.645 Keep Alive: Supported 00:15:36.645 Keep Alive Granularity: 10000 ms 00:15:36.645 00:15:36.645 NVM Command Set Attributes 00:15:36.645 ========================== 00:15:36.645 Submission Queue Entry Size 00:15:36.645 Max: 64 00:15:36.645 Min: 64 00:15:36.645 Completion Queue Entry Size 00:15:36.645 Max: 16 00:15:36.645 Min: 16 00:15:36.645 Number of Namespaces: 32 00:15:36.645 Compare Command: Supported 00:15:36.645 Write Uncorrectable Command: Not Supported 00:15:36.645 Dataset 
Management Command: Supported 00:15:36.645 Write Zeroes Command: Supported 00:15:36.645 Set Features Save Field: Not Supported 00:15:36.645 Reservations: Not Supported 00:15:36.645 Timestamp: Not Supported 00:15:36.645 Copy: Supported 00:15:36.645 Volatile Write Cache: Present 00:15:36.645 Atomic Write Unit (Normal): 1 00:15:36.645 Atomic Write Unit (PFail): 1 00:15:36.645 Atomic Compare & Write Unit: 1 00:15:36.645 Fused Compare & Write: Supported 00:15:36.645 Scatter-Gather List 00:15:36.645 SGL Command Set: Supported (Dword aligned) 00:15:36.645 SGL Keyed: Not Supported 00:15:36.645 SGL Bit Bucket Descriptor: Not Supported 00:15:36.645 SGL Metadata Pointer: Not Supported 00:15:36.645 Oversized SGL: Not Supported 00:15:36.645 SGL Metadata Address: Not Supported 00:15:36.645 SGL Offset: Not Supported 00:15:36.645 Transport SGL Data Block: Not Supported 00:15:36.645 Replay Protected Memory Block: Not Supported 00:15:36.645 00:15:36.645 Firmware Slot Information 00:15:36.645 ========================= 00:15:36.645 Active slot: 1 00:15:36.645 Slot 1 Firmware Revision: 25.01 00:15:36.645 00:15:36.645 00:15:36.645 Commands Supported and Effects 00:15:36.645 ============================== 00:15:36.645 Admin Commands 00:15:36.645 -------------- 00:15:36.645 Get Log Page (02h): Supported 00:15:36.645 Identify (06h): Supported 00:15:36.645 Abort (08h): Supported 00:15:36.645 Set Features (09h): Supported 00:15:36.645 Get Features (0Ah): Supported 00:15:36.645 Asynchronous Event Request (0Ch): Supported 00:15:36.645 Keep Alive (18h): Supported 00:15:36.645 I/O Commands 00:15:36.645 ------------ 00:15:36.645 Flush (00h): Supported LBA-Change 00:15:36.645 Write (01h): Supported LBA-Change 00:15:36.645 Read (02h): Supported 00:15:36.645 Compare (05h): Supported 00:15:36.645 Write Zeroes (08h): Supported LBA-Change 00:15:36.645 Dataset Management (09h): Supported LBA-Change 00:15:36.645 Copy (19h): Supported LBA-Change 00:15:36.645 00:15:36.645 Error Log 00:15:36.645 ========= 
00:15:36.645 00:15:36.645 Arbitration 00:15:36.645 =========== 00:15:36.645 Arbitration Burst: 1 00:15:36.645 00:15:36.645 Power Management 00:15:36.645 ================ 00:15:36.645 Number of Power States: 1 00:15:36.645 Current Power State: Power State #0 00:15:36.645 Power State #0: 00:15:36.646 Max Power: 0.00 W 00:15:36.646 Non-Operational State: Operational 00:15:36.646 Entry Latency: Not Reported 00:15:36.646 Exit Latency: Not Reported 00:15:36.646 Relative Read Throughput: 0 00:15:36.646 Relative Read Latency: 0 00:15:36.646 Relative Write Throughput: 0 00:15:36.646 Relative Write Latency: 0 00:15:36.646 Idle Power: Not Reported 00:15:36.646 Active Power: Not Reported 00:15:36.646 Non-Operational Permissive Mode: Not Supported 00:15:36.646 00:15:36.646 Health Information 00:15:36.646 ================== 00:15:36.646 Critical Warnings: 00:15:36.646 Available Spare Space: OK 00:15:36.646 Temperature: OK 00:15:36.646 Device Reliability: OK 00:15:36.646 Read Only: No 00:15:36.646 Volatile Memory Backup: OK 00:15:36.646 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:36.646 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:36.646 Available Spare: 0% 00:15:36.646 Available Sp[2024-11-06 08:51:49.778990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:36.646 [2024-11-06 08:51:49.779007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:36.646 [2024-11-06 08:51:49.779056] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:36.646 [2024-11-06 08:51:49.779074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.646 [2024-11-06 08:51:49.779085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.646 [2024-11-06 08:51:49.779099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.646 [2024-11-06 08:51:49.779109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:36.646 [2024-11-06 08:51:49.779541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:36.646 [2024-11-06 08:51:49.779561] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:36.646 [2024-11-06 08:51:49.780540] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:36.646 [2024-11-06 08:51:49.780615] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:36.646 [2024-11-06 08:51:49.780629] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:36.646 [2024-11-06 08:51:49.781550] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:36.646 [2024-11-06 08:51:49.781573] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:36.646 [2024-11-06 08:51:49.781626] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:36.646 [2024-11-06 08:51:49.783590] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:36.646 are Threshold: 0% 00:15:36.646 Life Percentage Used: 0% 00:15:36.646 Data Units Read: 0 00:15:36.646 Data 
Units Written: 0 00:15:36.646 Host Read Commands: 0 00:15:36.646 Host Write Commands: 0 00:15:36.646 Controller Busy Time: 0 minutes 00:15:36.646 Power Cycles: 0 00:15:36.646 Power On Hours: 0 hours 00:15:36.646 Unsafe Shutdowns: 0 00:15:36.646 Unrecoverable Media Errors: 0 00:15:36.646 Lifetime Error Log Entries: 0 00:15:36.646 Warning Temperature Time: 0 minutes 00:15:36.646 Critical Temperature Time: 0 minutes 00:15:36.646 00:15:36.646 Number of Queues 00:15:36.646 ================ 00:15:36.646 Number of I/O Submission Queues: 127 00:15:36.646 Number of I/O Completion Queues: 127 00:15:36.646 00:15:36.646 Active Namespaces 00:15:36.646 ================= 00:15:36.646 Namespace ID:1 00:15:36.646 Error Recovery Timeout: Unlimited 00:15:36.646 Command Set Identifier: NVM (00h) 00:15:36.646 Deallocate: Supported 00:15:36.646 Deallocated/Unwritten Error: Not Supported 00:15:36.646 Deallocated Read Value: Unknown 00:15:36.646 Deallocate in Write Zeroes: Not Supported 00:15:36.646 Deallocated Guard Field: 0xFFFF 00:15:36.646 Flush: Supported 00:15:36.646 Reservation: Supported 00:15:36.646 Namespace Sharing Capabilities: Multiple Controllers 00:15:36.646 Size (in LBAs): 131072 (0GiB) 00:15:36.646 Capacity (in LBAs): 131072 (0GiB) 00:15:36.646 Utilization (in LBAs): 131072 (0GiB) 00:15:36.646 NGUID: C066DC4F8866495A878B5255C0C6EBA6 00:15:36.646 UUID: c066dc4f-8866-495a-878b-5255c0c6eba6 00:15:36.646 Thin Provisioning: Not Supported 00:15:36.646 Per-NS Atomic Units: Yes 00:15:36.646 Atomic Boundary Size (Normal): 0 00:15:36.646 Atomic Boundary Size (PFail): 0 00:15:36.646 Atomic Boundary Offset: 0 00:15:36.646 Maximum Single Source Range Length: 65535 00:15:36.646 Maximum Copy Length: 65535 00:15:36.646 Maximum Source Range Count: 1 00:15:36.646 NGUID/EUI64 Never Reused: No 00:15:36.646 Namespace Write Protected: No 00:15:36.646 Number of LBA Formats: 1 00:15:36.646 Current LBA Format: LBA Format #00 00:15:36.646 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:15:36.646 00:15:36.646 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:36.904 [2024-11-06 08:51:50.045760] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.165 Initializing NVMe Controllers 00:15:42.165 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.165 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:42.165 Initialization complete. Launching workers. 00:15:42.165 ======================================================== 00:15:42.165 Latency(us) 00:15:42.165 Device Information : IOPS MiB/s Average min max 00:15:42.165 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33980.18 132.74 3765.57 1159.43 8297.38 00:15:42.165 ======================================================== 00:15:42.165 Total : 33980.18 132.74 3765.57 1159.43 8297.38 00:15:42.165 00:15:42.165 [2024-11-06 08:51:55.067744] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.165 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:42.165 [2024-11-06 08:51:55.333963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.425 Initializing NVMe Controllers 00:15:47.425 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:15:47.425 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:47.425 Initialization complete. Launching workers. 00:15:47.425 ======================================================== 00:15:47.425 Latency(us) 00:15:47.425 Device Information : IOPS MiB/s Average min max 00:15:47.425 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16038.20 62.65 7990.87 6200.68 14621.87 00:15:47.425 ======================================================== 00:15:47.425 Total : 16038.20 62.65 7990.87 6200.68 14621.87 00:15:47.425 00:15:47.426 [2024-11-06 08:52:00.373079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.426 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:47.426 [2024-11-06 08:52:00.595200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.687 [2024-11-06 08:52:05.674243] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.687 Initializing NVMe Controllers 00:15:52.687 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.687 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:52.687 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:52.687 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:52.687 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:52.687 Initialization complete. Launching workers. 
00:15:52.687 Starting thread on core 2 00:15:52.687 Starting thread on core 3 00:15:52.687 Starting thread on core 1 00:15:52.687 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:52.945 [2024-11-06 08:52:06.012301] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.225 [2024-11-06 08:52:09.110095] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.225 Initializing NVMe Controllers 00:15:56.225 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.225 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.225 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:56.225 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:56.225 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:56.225 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:56.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:56.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:56.225 Initialization complete. Launching workers. 
00:15:56.225 Starting thread on core 1 with urgent priority queue 00:15:56.225 Starting thread on core 2 with urgent priority queue 00:15:56.225 Starting thread on core 3 with urgent priority queue 00:15:56.225 Starting thread on core 0 with urgent priority queue 00:15:56.225 SPDK bdev Controller (SPDK1 ) core 0: 1007.00 IO/s 99.30 secs/100000 ios 00:15:56.225 SPDK bdev Controller (SPDK1 ) core 1: 1272.00 IO/s 78.62 secs/100000 ios 00:15:56.225 SPDK bdev Controller (SPDK1 ) core 2: 1219.33 IO/s 82.01 secs/100000 ios 00:15:56.225 SPDK bdev Controller (SPDK1 ) core 3: 1019.67 IO/s 98.07 secs/100000 ios 00:15:56.225 ======================================================== 00:15:56.225 00:15:56.225 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:56.225 [2024-11-06 08:52:09.434355] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:56.225 Initializing NVMe Controllers 00:15:56.225 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.225 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:56.225 Namespace ID: 1 size: 0GB 00:15:56.225 Initialization complete. 00:15:56.225 INFO: using host memory buffer for IO 00:15:56.225 Hello world! 
00:15:56.225 [2024-11-06 08:52:09.467990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:56.482 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:56.739 [2024-11-06 08:52:09.787305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.671 Initializing NVMe Controllers 00:15:57.671 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.671 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.671 Initialization complete. Launching workers. 00:15:57.671 submit (in ns) avg, min, max = 6757.9, 3560.0, 4006385.6 00:15:57.671 complete (in ns) avg, min, max = 29816.3, 2063.3, 6009104.4 00:15:57.671 00:15:57.671 Submit histogram 00:15:57.671 ================ 00:15:57.671 Range in us Cumulative Count 00:15:57.671 3.556 - 3.579: 0.1745% ( 23) 00:15:57.671 3.579 - 3.603: 2.2227% ( 270) 00:15:57.671 3.603 - 3.627: 10.5219% ( 1094) 00:15:57.671 3.627 - 3.650: 22.9176% ( 1634) 00:15:57.671 3.650 - 3.674: 31.3609% ( 1113) 00:15:57.671 3.674 - 3.698: 37.8547% ( 856) 00:15:57.671 3.698 - 3.721: 44.0146% ( 812) 00:15:57.671 3.721 - 3.745: 51.0241% ( 924) 00:15:57.671 3.745 - 3.769: 57.0096% ( 789) 00:15:57.671 3.769 - 3.793: 60.9467% ( 519) 00:15:57.671 3.793 - 3.816: 64.0570% ( 410) 00:15:57.671 3.816 - 3.840: 66.6287% ( 339) 00:15:57.671 3.840 - 3.864: 70.2397% ( 476) 00:15:57.671 3.864 - 3.887: 74.4197% ( 551) 00:15:57.671 3.887 - 3.911: 78.2734% ( 508) 00:15:57.671 3.911 - 3.935: 81.2320% ( 390) 00:15:57.671 3.935 - 3.959: 83.4168% ( 288) 00:15:57.671 3.959 - 3.982: 85.3512% ( 255) 00:15:57.671 3.982 - 4.006: 87.2098% ( 245) 00:15:57.671 4.006 - 4.030: 88.4843% ( 168) 00:15:57.671 4.030 - 4.053: 89.5160% ( 
136) 00:15:57.671 4.053 - 4.077: 90.2974% ( 103) 00:15:57.671 4.077 - 4.101: 91.0105% ( 94) 00:15:57.671 4.101 - 4.124: 91.6780% ( 88) 00:15:57.671 4.124 - 4.148: 92.2242% ( 72) 00:15:57.671 4.148 - 4.172: 92.6339% ( 54) 00:15:57.671 4.172 - 4.196: 93.0360% ( 53) 00:15:57.671 4.196 - 4.219: 93.3166% ( 37) 00:15:57.671 4.219 - 4.243: 93.5442% ( 30) 00:15:57.671 4.243 - 4.267: 93.7035% ( 21) 00:15:57.671 4.267 - 4.290: 93.8780% ( 23) 00:15:57.671 4.290 - 4.314: 94.0373% ( 21) 00:15:57.671 4.314 - 4.338: 94.1739% ( 18) 00:15:57.671 4.338 - 4.361: 94.2877% ( 15) 00:15:57.671 4.361 - 4.385: 94.3939% ( 14) 00:15:57.671 4.385 - 4.409: 94.5456% ( 20) 00:15:57.671 4.409 - 4.433: 94.6442% ( 13) 00:15:57.671 4.433 - 4.456: 94.7504% ( 14) 00:15:57.671 4.456 - 4.480: 94.8642% ( 15) 00:15:57.671 4.480 - 4.504: 94.9325% ( 9) 00:15:57.671 4.504 - 4.527: 95.0083% ( 10) 00:15:57.671 4.527 - 4.551: 95.0614% ( 7) 00:15:57.671 4.551 - 4.575: 95.1297% ( 9) 00:15:57.671 4.575 - 4.599: 95.1752% ( 6) 00:15:57.671 4.599 - 4.622: 95.2208% ( 6) 00:15:57.671 4.622 - 4.646: 95.2663% ( 6) 00:15:57.671 4.646 - 4.670: 95.3194% ( 7) 00:15:57.671 4.670 - 4.693: 95.3725% ( 7) 00:15:57.671 4.693 - 4.717: 95.4332% ( 8) 00:15:57.671 4.717 - 4.741: 95.4711% ( 5) 00:15:57.671 4.741 - 4.764: 95.5242% ( 7) 00:15:57.671 4.764 - 4.788: 95.6001% ( 10) 00:15:57.671 4.788 - 4.812: 95.6607% ( 8) 00:15:57.671 4.812 - 4.836: 95.6987% ( 5) 00:15:57.671 4.836 - 4.859: 95.7897% ( 12) 00:15:57.671 4.859 - 4.883: 95.8504% ( 8) 00:15:57.671 4.883 - 4.907: 95.9035% ( 7) 00:15:57.671 4.907 - 4.930: 95.9718% ( 9) 00:15:57.671 4.930 - 4.954: 96.0932% ( 16) 00:15:57.671 4.954 - 4.978: 96.1538% ( 8) 00:15:57.671 4.978 - 5.001: 96.2373% ( 11) 00:15:57.671 5.001 - 5.025: 96.3207% ( 11) 00:15:57.671 5.025 - 5.049: 96.4194% ( 13) 00:15:57.671 5.049 - 5.073: 96.5028% ( 11) 00:15:57.671 5.073 - 5.096: 96.5559% ( 7) 00:15:57.671 5.096 - 5.120: 96.6394% ( 11) 00:15:57.671 5.120 - 5.144: 96.7152% ( 10) 00:15:57.671 5.144 - 5.167: 
96.8290% ( 15) 00:15:57.671 5.167 - 5.191: 96.9125% ( 11) 00:15:57.671 5.191 - 5.215: 96.9656% ( 7) 00:15:57.671 5.215 - 5.239: 97.0035% ( 5) 00:15:57.671 5.239 - 5.262: 97.0945% ( 12) 00:15:57.671 5.262 - 5.286: 97.1704% ( 10) 00:15:57.671 5.286 - 5.310: 97.2538% ( 11) 00:15:57.671 5.310 - 5.333: 97.3373% ( 11) 00:15:57.671 5.333 - 5.357: 97.3525% ( 2) 00:15:57.671 5.357 - 5.381: 97.4131% ( 8) 00:15:57.671 5.381 - 5.404: 97.4435% ( 4) 00:15:57.671 5.404 - 5.428: 97.5269% ( 11) 00:15:57.671 5.428 - 5.452: 97.5724% ( 6) 00:15:57.671 5.452 - 5.476: 97.5952% ( 3) 00:15:57.671 5.476 - 5.499: 97.6407% ( 6) 00:15:57.671 5.499 - 5.523: 97.6938% ( 7) 00:15:57.671 5.523 - 5.547: 97.7318% ( 5) 00:15:57.671 5.547 - 5.570: 97.7469% ( 2) 00:15:57.671 5.570 - 5.594: 97.7545% ( 1) 00:15:57.671 5.594 - 5.618: 97.7773% ( 3) 00:15:57.671 5.618 - 5.641: 97.8000% ( 3) 00:15:57.671 5.641 - 5.665: 97.8304% ( 4) 00:15:57.671 5.665 - 5.689: 97.8455% ( 2) 00:15:57.671 5.689 - 5.713: 97.8531% ( 1) 00:15:57.671 5.713 - 5.736: 97.8759% ( 3) 00:15:57.671 5.736 - 5.760: 97.8911% ( 2) 00:15:57.672 5.784 - 5.807: 97.9214% ( 4) 00:15:57.672 5.807 - 5.831: 97.9290% ( 1) 00:15:57.672 5.879 - 5.902: 97.9366% ( 1) 00:15:57.672 5.902 - 5.926: 97.9518% ( 2) 00:15:57.672 5.926 - 5.950: 97.9593% ( 1) 00:15:57.672 5.950 - 5.973: 97.9669% ( 1) 00:15:57.672 5.973 - 5.997: 97.9745% ( 1) 00:15:57.672 5.997 - 6.021: 97.9821% ( 1) 00:15:57.672 6.021 - 6.044: 97.9973% ( 2) 00:15:57.672 6.068 - 6.116: 98.0124% ( 2) 00:15:57.672 6.116 - 6.163: 98.0200% ( 1) 00:15:57.672 6.163 - 6.210: 98.0276% ( 1) 00:15:57.672 6.210 - 6.258: 98.0352% ( 1) 00:15:57.672 6.258 - 6.305: 98.0504% ( 2) 00:15:57.672 6.305 - 6.353: 98.0580% ( 1) 00:15:57.672 6.400 - 6.447: 98.0807% ( 3) 00:15:57.672 6.495 - 6.542: 98.0959% ( 2) 00:15:57.672 6.590 - 6.637: 98.1035% ( 1) 00:15:57.672 6.637 - 6.684: 98.1186% ( 2) 00:15:57.672 6.732 - 6.779: 98.1262% ( 1) 00:15:57.672 6.874 - 6.921: 98.1338% ( 1) 00:15:57.672 7.064 - 7.111: 98.1414% ( 1) 
00:15:57.672 7.111 - 7.159: 98.1566% ( 2) 00:15:57.672 7.159 - 7.206: 98.1642% ( 1) 00:15:57.672 7.206 - 7.253: 98.1793% ( 2) 00:15:57.672 7.253 - 7.301: 98.1869% ( 1) 00:15:57.672 7.348 - 7.396: 98.1945% ( 1) 00:15:57.672 7.443 - 7.490: 98.2021% ( 1) 00:15:57.672 7.538 - 7.585: 98.2173% ( 2) 00:15:57.672 7.822 - 7.870: 98.2324% ( 2) 00:15:57.672 7.870 - 7.917: 98.2400% ( 1) 00:15:57.672 7.917 - 7.964: 98.2552% ( 2) 00:15:57.672 8.154 - 8.201: 98.2628% ( 1) 00:15:57.672 8.296 - 8.344: 98.2704% ( 1) 00:15:57.672 8.391 - 8.439: 98.2780% ( 1) 00:15:57.672 8.439 - 8.486: 98.2855% ( 1) 00:15:57.672 8.533 - 8.581: 98.3007% ( 2) 00:15:57.672 8.581 - 8.628: 98.3083% ( 1) 00:15:57.672 8.818 - 8.865: 98.3159% ( 1) 00:15:57.672 8.865 - 8.913: 98.3235% ( 1) 00:15:57.672 9.055 - 9.102: 98.3311% ( 1) 00:15:57.672 9.102 - 9.150: 98.3462% ( 2) 00:15:57.672 9.150 - 9.197: 98.3614% ( 2) 00:15:57.672 9.197 - 9.244: 98.3690% ( 1) 00:15:57.672 9.244 - 9.292: 98.3842% ( 2) 00:15:57.672 9.292 - 9.339: 98.3917% ( 1) 00:15:57.672 9.529 - 9.576: 98.3993% ( 1) 00:15:57.672 9.576 - 9.624: 98.4069% ( 1) 00:15:57.672 9.624 - 9.671: 98.4221% ( 2) 00:15:57.672 9.719 - 9.766: 98.4297% ( 1) 00:15:57.672 9.861 - 9.908: 98.4373% ( 1) 00:15:57.672 9.956 - 10.003: 98.4524% ( 2) 00:15:57.672 10.050 - 10.098: 98.4676% ( 2) 00:15:57.672 10.145 - 10.193: 98.4904% ( 3) 00:15:57.672 10.193 - 10.240: 98.4980% ( 1) 00:15:57.672 10.240 - 10.287: 98.5055% ( 1) 00:15:57.672 10.287 - 10.335: 98.5207% ( 2) 00:15:57.672 10.335 - 10.382: 98.5283% ( 1) 00:15:57.672 10.477 - 10.524: 98.5359% ( 1) 00:15:57.672 10.619 - 10.667: 98.5435% ( 1) 00:15:57.672 10.667 - 10.714: 98.5511% ( 1) 00:15:57.672 10.714 - 10.761: 98.5586% ( 1) 00:15:57.672 10.809 - 10.856: 98.5662% ( 1) 00:15:57.672 10.856 - 10.904: 98.5738% ( 1) 00:15:57.672 10.904 - 10.951: 98.5814% ( 1) 00:15:57.672 10.999 - 11.046: 98.5890% ( 1) 00:15:57.672 11.093 - 11.141: 98.5966% ( 1) 00:15:57.672 11.141 - 11.188: 98.6042% ( 1) 00:15:57.672 11.283 - 11.330: 
98.6193% ( 2) 00:15:57.672 11.378 - 11.425: 98.6269% ( 1) 00:15:57.672 11.757 - 11.804: 98.6421% ( 2) 00:15:57.672 11.804 - 11.852: 98.6497% ( 1) 00:15:57.672 11.899 - 11.947: 98.6648% ( 2) 00:15:57.672 12.089 - 12.136: 98.6724% ( 1) 00:15:57.672 12.136 - 12.231: 98.7028% ( 4) 00:15:57.672 12.231 - 12.326: 98.7179% ( 2) 00:15:57.672 12.326 - 12.421: 98.7331% ( 2) 00:15:57.672 12.421 - 12.516: 98.7483% ( 2) 00:15:57.672 12.516 - 12.610: 98.7559% ( 1) 00:15:57.672 12.705 - 12.800: 98.7635% ( 1) 00:15:57.672 12.800 - 12.895: 98.7711% ( 1) 00:15:57.672 12.895 - 12.990: 98.7786% ( 1) 00:15:57.672 12.990 - 13.084: 98.7938% ( 2) 00:15:57.672 13.084 - 13.179: 98.8090% ( 2) 00:15:57.672 13.274 - 13.369: 98.8166% ( 1) 00:15:57.672 13.653 - 13.748: 98.8242% ( 1) 00:15:57.672 13.843 - 13.938: 98.8317% ( 1) 00:15:57.672 13.938 - 14.033: 98.8393% ( 1) 00:15:57.672 14.222 - 14.317: 98.8469% ( 1) 00:15:57.672 14.317 - 14.412: 98.8545% ( 1) 00:15:57.672 14.696 - 14.791: 98.8621% ( 1) 00:15:57.672 14.791 - 14.886: 98.8697% ( 1) 00:15:57.672 14.886 - 14.981: 98.8773% ( 1) 00:15:57.672 14.981 - 15.076: 98.8848% ( 1) 00:15:57.672 15.076 - 15.170: 98.8924% ( 1) 00:15:57.672 15.834 - 15.929: 98.9000% ( 1) 00:15:57.672 16.119 - 16.213: 98.9076% ( 1) 00:15:57.672 17.067 - 17.161: 98.9228% ( 2) 00:15:57.672 17.161 - 17.256: 98.9304% ( 1) 00:15:57.672 17.256 - 17.351: 98.9455% ( 2) 00:15:57.672 17.351 - 17.446: 98.9759% ( 4) 00:15:57.672 17.446 - 17.541: 99.0214% ( 6) 00:15:57.672 17.636 - 17.730: 99.0821% ( 8) 00:15:57.672 17.730 - 17.825: 99.1124% ( 4) 00:15:57.672 17.825 - 17.920: 99.1883% ( 10) 00:15:57.672 17.920 - 18.015: 99.3248% ( 18) 00:15:57.672 18.015 - 18.110: 99.3476% ( 3) 00:15:57.672 18.110 - 18.204: 99.4083% ( 8) 00:15:57.672 18.204 - 18.299: 99.4614% ( 7) 00:15:57.672 18.299 - 18.394: 99.5221% ( 8) 00:15:57.672 18.394 - 18.489: 99.6055% ( 11) 00:15:57.672 18.489 - 18.584: 99.6510% ( 6) 00:15:57.672 18.584 - 18.679: 99.6662% ( 2) 00:15:57.672 18.679 - 18.773: 99.7041% ( 5) 
00:15:57.672 18.773 - 18.868: 99.7269% ( 3) 00:15:57.672 18.868 - 18.963: 99.7345% ( 1) 00:15:57.672 18.963 - 19.058: 99.7497% ( 2) 00:15:57.672 19.058 - 19.153: 99.7876% ( 5) 00:15:57.672 19.153 - 19.247: 99.7952% ( 1) 00:15:57.672 19.247 - 19.342: 99.8103% ( 2) 00:15:57.672 19.437 - 19.532: 99.8179% ( 1) 00:15:57.672 19.627 - 19.721: 99.8331% ( 2) 00:15:57.672 19.911 - 20.006: 99.8407% ( 1) 00:15:57.672 20.764 - 20.859: 99.8483% ( 1) 00:15:57.672 21.049 - 21.144: 99.8559% ( 1) 00:15:57.672 22.092 - 22.187: 99.8635% ( 1) 00:15:57.672 22.471 - 22.566: 99.8710% ( 1) 00:15:57.672 23.988 - 24.083: 99.8786% ( 1) 00:15:57.672 24.083 - 24.178: 99.8862% ( 1) 00:15:57.672 24.652 - 24.841: 99.8938% ( 1) 00:15:57.672 25.031 - 25.221: 99.9014% ( 1) 00:15:57.672 25.221 - 25.410: 99.9090% ( 1) 00:15:57.672 28.824 - 29.013: 99.9166% ( 1) 00:15:57.672 29.961 - 30.151: 99.9241% ( 1) 00:15:57.672 43.425 - 43.615: 99.9317% ( 1) 00:15:57.672 3980.705 - 4004.978: 99.9848% ( 7) 00:15:57.672 4004.978 - 4029.250: 100.0000% ( 2) 00:15:57.672 00:15:57.672 Complete histogram 00:15:57.672 ================== 00:15:57.672 Range in us Cumulative Count 00:15:57.672 2.062 - 2.074: 7.1309% ( 940) 00:15:57.672 2.074 - 2.086: 27.7575% ( 2719) 00:15:57.672 2.086 - 2.098: 29.6237% ( 246) 00:15:57.672 2.098 - 2.110: 45.4787% ( 2090) 00:15:57.672 2.110 - 2.121: 56.9261% ( 1509) 00:15:57.672 2.121 - 2.133: 58.3220% ( 184) 00:15:57.672 2.133 - 2.145: 64.9826% ( 878) 00:15:57.672 2.145 - 2.157: 70.5508% ( 734) 00:15:57.672 2.157 - 2.169: 71.8631% ( 173) 00:15:57.672 2.169 - 2.181: 78.6527% ( 895) 00:15:57.672 2.181 - 2.193: 81.8009% ( 415) 00:15:57.672 2.193 - 2.204: 82.4002% ( 79) 00:15:57.672 2.204 - 2.216: 84.1147% ( 226) 00:15:57.672 2.216 - 2.228: 86.4209% ( 304) 00:15:57.672 2.228 - 2.240: 87.8698% ( 191) 00:15:57.672 2.240 - 2.252: 89.5767% ( 225) 00:15:57.672 2.252 - 2.264: 90.8056% ( 162) 00:15:57.672 2.264 - 2.276: 91.1546% ( 46) 00:15:57.672 2.276 - 2.287: 91.5263% ( 49) 00:15:57.672 2.287 - 
2.299: 91.9360% ( 54) 00:15:57.672 2.299 - 2.311: 92.4367% ( 66) 00:15:57.672 2.311 - 2.323: 92.6263% ( 25) 00:15:57.672 2.323 - 2.335: 92.7477% ( 16) 00:15:57.672 2.335 - 2.347: 92.8160% ( 9) 00:15:57.672 2.347 - 2.359: 92.8615% ( 6) 00:15:57.672 2.359 - 2.370: 92.9829% ( 16) 00:15:57.672 2.370 - 2.382: 93.1042% ( 16) 00:15:57.672 2.382 - 2.394: 93.2787% ( 23) 00:15:57.672 2.394 - 2.406: 93.4760% ( 26) 00:15:57.672 2.406 - 2.418: 93.6580% ( 24) 00:15:57.672 2.418 - 2.430: 93.8401% ( 24) 00:15:57.672 2.430 - 2.441: 94.0070% ( 22) 00:15:57.672 2.441 - 2.453: 94.1966% ( 25) 00:15:57.672 2.453 - 2.465: 94.3256% ( 17) 00:15:57.672 2.465 - 2.477: 94.4925% ( 22) 00:15:57.672 2.477 - 2.489: 94.6063% ( 15) 00:15:57.672 2.489 - 2.501: 94.7504% ( 19) 00:15:57.672 2.501 - 2.513: 94.8870% ( 18) 00:15:57.672 2.513 - 2.524: 94.9477% ( 8) 00:15:57.672 2.524 - 2.536: 95.0539% ( 14) 00:15:57.672 2.536 - 2.548: 95.1752% ( 16) 00:15:57.672 2.548 - 2.560: 95.2966% ( 16) 00:15:57.672 2.560 - 2.572: 95.4180% ( 16) 00:15:57.672 2.572 - 2.584: 95.5090% ( 12) 00:15:57.672 2.584 - 2.596: 95.5242% ( 2) 00:15:57.672 2.596 - 2.607: 95.6076% ( 11) 00:15:57.672 2.607 - 2.619: 95.6759% ( 9) 00:15:57.672 2.619 - 2.631: 95.7745% ( 13) 00:15:57.672 2.631 - 2.643: 95.8656% ( 12) 00:15:57.672 2.643 - 2.655: 95.8959% ( 4) 00:15:57.672 2.655 - 2.667: 95.9490% ( 7) 00:15:57.672 2.667 - 2.679: 96.0476% ( 13) 00:15:57.673 2.679 - 2.690: 96.1007% ( 7) 00:15:57.673 2.690 - 2.702: 96.1918% ( 12) 00:15:57.673 2.702 - 2.714: 96.2525% ( 8) 00:15:57.673 2.714 - 2.726: 96.3511% ( 13) 00:15:57.673 2.726 - 2.738: 96.3814% ( 4) 00:15:57.673 2.738 - 2.750: 96.4194% ( 5) 00:15:57.673 2.750 - 2.761: 96.4649% ( 6) 00:15:57.673 2.761 - 2.773: 96.5180% ( 7) 00:15:57.673 2.773 - 2.785: 96.5407% ( 3) 00:15:57.673 2.785 - 2.797: 96.5711% ( 4) 00:15:57.673 2.797 - 2.809: 96.6394% ( 9) 00:15:57.673 2.809 - 2.821: 96.6697% ( 4) 00:15:57.673 2.821 - 2.833: 96.7076% ( 5) 00:15:57.673 2.833 - 2.844: 96.7456% ( 5) 00:15:57.673 2.844 
- 2.856: 96.7759% ( 4) 00:15:57.673 2.856 - 2.868: 96.8214% ( 6) 00:15:57.673 2.868 - 2.880: 96.8594% ( 5) 00:15:57.673 2.880 - 2.892: 96.8745% ( 2) 00:15:57.673 2.892 - 2.904: 96.9276% ( 7) 00:15:57.673 2.904 - 2.916: 96.9580% ( 4) 00:15:57.673 2.916 - 2.927: 96.9959% ( 5) 00:15:57.673 2.927 - 2.939: 97.0262% ( 4) 00:15:57.673 2.939 - 2.951: 97.0642% ( 5) 00:15:57.673 2.951 - 2.963: 97.1097% ( 6) 00:15:57.673 2.963 - 2.975: 97.1173% ( 1) 00:15:57.673 2.975 - 2.987: 97.1476% ( 4) 00:15:57.673 2.987 - 2.999: 97.1931% ( 6) 00:15:57.673 2.999 - 3.010: 97.2462% ( 7) 00:15:57.673 3.010 - 3.022: 97.2842% ( 5) 00:15:57.673 3.022 - 3.034: 97.3449% ( 8) 00:15:57.673 3.034 - 3.058: 97.3904% ( 6) 00:15:57.673 3.058 - 3.081: 97.4890% ( 13) 00:15:57.673 3.081 - 3.105: 97.5800% ( 12) 00:15:57.673 3.105 - 3.129: 97.6787% ( 13) 00:15:57.673 3.129 - 3.153: 97.7393% ( 8) 00:15:57.673 3.153 - 3.176: 97.8304% ( 12) 00:15:57.673 3.176 - 3.200: 97.8607% ( 4) 00:15:57.673 3.200 - 3.224: 97.8911% ( 4) 00:15:57.673 3.224 - 3.247: 97.9593% ( 9) 00:15:57.673 3.247 - 3.271: 97.9669% ( 1) 00:15:57.673 3.271 - 3.295: 97.9973% ( 4) 00:15:57.673 3.295 - 3.319: 98.0276% ( 4) 00:15:57.673 3.319 - 3.342: 98.0959% ( 9) 00:15:57.673 3.342 - 3.366: 98.1414% ( 6) 00:15:57.673 3.366 - 3.390: 98.1717% ( 4) 00:15:57.673 3.390 - 3.413: 98.1869% ( 2) 00:15:57.673 3.413 - 3.437: 98.2097% ( 3) 00:15:57.673 3.437 - 3.461: 98.2324% ( 3) 00:15:57.673 3.461 - 3.484: 98.2628% ( 4) 00:15:57.673 3.484 - 3.508: 98.2855% ( 3) 00:15:57.673 3.508 - 3.532: 98.3083% ( 3) 00:15:57.673 3.579 - 3.603: 98.3235% ( 2) 00:15:57.673 3.627 - 3.650: 98.3311% ( 1) 00:15:57.673 3.650 - 3.674: 98.3386% ( 1) 00:15:57.673 3.674 - 3.698: 98.3538% ( 2) 00:15:57.673 3.745 - 3.769: 98.3614% ( 1) 00:15:57.673 3.769 - 3.793: 98.3766%[2024-11-06 08:52:10.809375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:57.673 ( 2) 00:15:57.673 3.816 - 3.840: 98.3917% ( 2) 00:15:57.673 3.840 - 
3.864: 98.3993% ( 1) 00:15:57.673 3.864 - 3.887: 98.4145% ( 2) 00:15:57.673 3.935 - 3.959: 98.4221% ( 1) 00:15:57.673 3.959 - 3.982: 98.4448% ( 3) 00:15:57.673 3.982 - 4.006: 98.4524% ( 1) 00:15:57.673 4.101 - 4.124: 98.4600% ( 1) 00:15:57.673 4.124 - 4.148: 98.4752% ( 2) 00:15:57.673 4.172 - 4.196: 98.4828% ( 1) 00:15:57.673 4.361 - 4.385: 98.4980% ( 2) 00:15:57.673 4.385 - 4.409: 98.5055% ( 1) 00:15:57.673 4.480 - 4.504: 98.5131% ( 1) 00:15:57.673 4.575 - 4.599: 98.5207% ( 1) 00:15:57.673 5.404 - 5.428: 98.5283% ( 1) 00:15:57.673 5.618 - 5.641: 98.5359% ( 1) 00:15:57.673 5.713 - 5.736: 98.5435% ( 1) 00:15:57.673 6.021 - 6.044: 98.5511% ( 1) 00:15:57.673 6.305 - 6.353: 98.5662% ( 2) 00:15:57.673 6.542 - 6.590: 98.5738% ( 1) 00:15:57.673 6.921 - 6.969: 98.5814% ( 1) 00:15:57.673 7.538 - 7.585: 98.5966% ( 2) 00:15:57.673 7.633 - 7.680: 98.6042% ( 1) 00:15:57.673 7.870 - 7.917: 98.6117% ( 1) 00:15:57.673 8.107 - 8.154: 98.6193% ( 1) 00:15:57.673 8.249 - 8.296: 98.6269% ( 1) 00:15:57.673 8.486 - 8.533: 98.6345% ( 1) 00:15:57.673 8.723 - 8.770: 98.6421% ( 1) 00:15:57.673 9.292 - 9.339: 98.6497% ( 1) 00:15:57.673 9.339 - 9.387: 98.6573% ( 1) 00:15:57.673 10.098 - 10.145: 98.6876% ( 4) 00:15:57.673 10.477 - 10.524: 98.6952% ( 1) 00:15:57.673 10.572 - 10.619: 98.7028% ( 1) 00:15:57.673 10.714 - 10.761: 98.7104% ( 1) 00:15:57.673 10.856 - 10.904: 98.7179% ( 1) 00:15:57.673 11.141 - 11.188: 98.7255% ( 1) 00:15:57.673 11.283 - 11.330: 98.7331% ( 1) 00:15:57.673 11.425 - 11.473: 98.7407% ( 1) 00:15:57.673 12.089 - 12.136: 98.7483% ( 1) 00:15:57.673 14.033 - 14.127: 98.7559% ( 1) 00:15:57.673 14.791 - 14.886: 98.7635% ( 1) 00:15:57.673 15.265 - 15.360: 98.7711% ( 1) 00:15:57.673 15.360 - 15.455: 98.7786% ( 1) 00:15:57.673 15.455 - 15.550: 98.7862% ( 1) 00:15:57.673 15.550 - 15.644: 98.7938% ( 1) 00:15:57.673 15.644 - 15.739: 98.8014% ( 1) 00:15:57.673 15.739 - 15.834: 98.8166% ( 2) 00:15:57.673 15.834 - 15.929: 98.8242% ( 1) 00:15:57.673 15.929 - 16.024: 98.8469% ( 3) 
00:15:57.673 16.024 - 16.119: 98.8773% ( 4) 00:15:57.673 16.119 - 16.213: 98.8848% ( 1) 00:15:57.673 16.213 - 16.308: 98.9304% ( 6) 00:15:57.673 16.308 - 16.403: 98.9607% ( 4) 00:15:57.673 16.403 - 16.498: 99.0062% ( 6) 00:15:57.673 16.498 - 16.593: 99.0366% ( 4) 00:15:57.673 16.593 - 16.687: 99.0821% ( 6) 00:15:57.673 16.687 - 16.782: 99.1504% ( 9) 00:15:57.673 16.782 - 16.877: 99.1731% ( 3) 00:15:57.673 16.877 - 16.972: 99.1959% ( 3) 00:15:57.673 17.067 - 17.161: 99.2035% ( 1) 00:15:57.673 17.161 - 17.256: 99.2186% ( 2) 00:15:57.673 17.256 - 17.351: 99.2338% ( 2) 00:15:57.673 17.541 - 17.636: 99.2490% ( 2) 00:15:57.673 17.636 - 17.730: 99.2566% ( 1) 00:15:57.673 17.730 - 17.825: 99.2793% ( 3) 00:15:57.673 18.015 - 18.110: 99.2869% ( 1) 00:15:57.673 18.204 - 18.299: 99.2945% ( 1) 00:15:57.673 19.153 - 19.247: 99.3021% ( 1) 00:15:57.673 25.410 - 25.600: 99.3097% ( 1) 00:15:57.673 1037.653 - 1043.721: 99.3173% ( 1) 00:15:57.673 3980.705 - 4004.978: 99.8559% ( 71) 00:15:57.673 4004.978 - 4029.250: 99.9924% ( 18) 00:15:57.673 5995.330 - 6019.603: 100.0000% ( 1) 00:15:57.673 00:15:57.673 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:57.673 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:57.673 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:57.673 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:57.673 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.931 [ 00:15:57.931 { 00:15:57.931 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.931 "subtype": "Discovery", 
00:15:57.931 "listen_addresses": [], 00:15:57.931 "allow_any_host": true, 00:15:57.931 "hosts": [] 00:15:57.931 }, 00:15:57.931 { 00:15:57.931 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.931 "subtype": "NVMe", 00:15:57.931 "listen_addresses": [ 00:15:57.931 { 00:15:57.931 "trtype": "VFIOUSER", 00:15:57.931 "adrfam": "IPv4", 00:15:57.931 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.931 "trsvcid": "0" 00:15:57.931 } 00:15:57.931 ], 00:15:57.931 "allow_any_host": true, 00:15:57.931 "hosts": [], 00:15:57.931 "serial_number": "SPDK1", 00:15:57.931 "model_number": "SPDK bdev Controller", 00:15:57.931 "max_namespaces": 32, 00:15:57.931 "min_cntlid": 1, 00:15:57.931 "max_cntlid": 65519, 00:15:57.931 "namespaces": [ 00:15:57.931 { 00:15:57.931 "nsid": 1, 00:15:57.931 "bdev_name": "Malloc1", 00:15:57.931 "name": "Malloc1", 00:15:57.931 "nguid": "C066DC4F8866495A878B5255C0C6EBA6", 00:15:57.931 "uuid": "c066dc4f-8866-495a-878b-5255c0c6eba6" 00:15:57.931 } 00:15:57.931 ] 00:15:57.931 }, 00:15:57.931 { 00:15:57.931 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.931 "subtype": "NVMe", 00:15:57.931 "listen_addresses": [ 00:15:57.931 { 00:15:57.931 "trtype": "VFIOUSER", 00:15:57.931 "adrfam": "IPv4", 00:15:57.931 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.931 "trsvcid": "0" 00:15:57.931 } 00:15:57.931 ], 00:15:57.931 "allow_any_host": true, 00:15:57.931 "hosts": [], 00:15:57.931 "serial_number": "SPDK2", 00:15:57.931 "model_number": "SPDK bdev Controller", 00:15:57.931 "max_namespaces": 32, 00:15:57.931 "min_cntlid": 1, 00:15:57.931 "max_cntlid": 65519, 00:15:57.931 "namespaces": [ 00:15:57.931 { 00:15:57.931 "nsid": 1, 00:15:57.931 "bdev_name": "Malloc2", 00:15:57.931 "name": "Malloc2", 00:15:57.931 "nguid": "6732BDCEC82E4D2A923E85E4275B5FF1", 00:15:57.931 "uuid": "6732bdce-c82e-4d2a-923e-85e4275b5ff1" 00:15:57.931 } 00:15:57.931 ] 00:15:57.931 } 00:15:57.931 ] 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=805032 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.931 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:58.189 [2024-11-06 08:52:11.358903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:58.446 Malloc3 00:15:58.446 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:58.703 [2024-11-06 08:52:11.769996] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:58.703 
08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:58.703 Asynchronous Event Request test 00:15:58.703 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.703 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:58.703 Registering asynchronous event callbacks... 00:15:58.703 Starting namespace attribute notice tests for all controllers... 00:15:58.703 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:58.703 aer_cb - Changed Namespace 00:15:58.703 Cleaning up... 00:15:58.962 [ 00:15:58.962 { 00:15:58.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:58.962 "subtype": "Discovery", 00:15:58.962 "listen_addresses": [], 00:15:58.962 "allow_any_host": true, 00:15:58.962 "hosts": [] 00:15:58.962 }, 00:15:58.962 { 00:15:58.962 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:58.962 "subtype": "NVMe", 00:15:58.962 "listen_addresses": [ 00:15:58.962 { 00:15:58.962 "trtype": "VFIOUSER", 00:15:58.962 "adrfam": "IPv4", 00:15:58.962 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:58.962 "trsvcid": "0" 00:15:58.962 } 00:15:58.962 ], 00:15:58.962 "allow_any_host": true, 00:15:58.962 "hosts": [], 00:15:58.962 "serial_number": "SPDK1", 00:15:58.962 "model_number": "SPDK bdev Controller", 00:15:58.962 "max_namespaces": 32, 00:15:58.962 "min_cntlid": 1, 00:15:58.962 "max_cntlid": 65519, 00:15:58.962 "namespaces": [ 00:15:58.962 { 00:15:58.962 "nsid": 1, 00:15:58.962 "bdev_name": "Malloc1", 00:15:58.962 "name": "Malloc1", 00:15:58.962 "nguid": "C066DC4F8866495A878B5255C0C6EBA6", 00:15:58.962 "uuid": "c066dc4f-8866-495a-878b-5255c0c6eba6" 00:15:58.962 }, 00:15:58.962 { 00:15:58.962 "nsid": 2, 00:15:58.962 "bdev_name": "Malloc3", 00:15:58.962 "name": "Malloc3", 00:15:58.962 "nguid": "CB35DE5B04F9482280CCBF07063C1D21", 00:15:58.962 "uuid": "cb35de5b-04f9-4822-80cc-bf07063c1d21" 
00:15:58.962 } 00:15:58.962 ] 00:15:58.962 }, 00:15:58.962 { 00:15:58.962 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:58.962 "subtype": "NVMe", 00:15:58.962 "listen_addresses": [ 00:15:58.962 { 00:15:58.962 "trtype": "VFIOUSER", 00:15:58.962 "adrfam": "IPv4", 00:15:58.962 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:58.962 "trsvcid": "0" 00:15:58.962 } 00:15:58.962 ], 00:15:58.962 "allow_any_host": true, 00:15:58.962 "hosts": [], 00:15:58.962 "serial_number": "SPDK2", 00:15:58.962 "model_number": "SPDK bdev Controller", 00:15:58.962 "max_namespaces": 32, 00:15:58.962 "min_cntlid": 1, 00:15:58.962 "max_cntlid": 65519, 00:15:58.962 "namespaces": [ 00:15:58.962 { 00:15:58.962 "nsid": 1, 00:15:58.962 "bdev_name": "Malloc2", 00:15:58.962 "name": "Malloc2", 00:15:58.962 "nguid": "6732BDCEC82E4D2A923E85E4275B5FF1", 00:15:58.962 "uuid": "6732bdce-c82e-4d2a-923e-85e4275b5ff1" 00:15:58.962 } 00:15:58.962 ] 00:15:58.962 } 00:15:58.962 ] 00:15:58.962 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 805032 00:15:58.962 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.962 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:58.962 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:58.962 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:58.962 [2024-11-06 08:52:12.080383] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:15:58.962 [2024-11-06 08:52:12.080425] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805060 ] 00:15:58.962 [2024-11-06 08:52:12.129602] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:58.962 [2024-11-06 08:52:12.138162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.962 [2024-11-06 08:52:12.138193] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd370d58000 00:15:58.962 [2024-11-06 08:52:12.139158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.140154] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.141159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.142166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.143175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.144191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.145183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.962 
[2024-11-06 08:52:12.146193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.962 [2024-11-06 08:52:12.147205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.962 [2024-11-06 08:52:12.147230] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd370d4d000 00:15:58.962 [2024-11-06 08:52:12.148347] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:58.962 [2024-11-06 08:52:12.163049] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:58.962 [2024-11-06 08:52:12.163085] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:58.962 [2024-11-06 08:52:12.165184] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.962 [2024-11-06 08:52:12.165235] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:58.962 [2024-11-06 08:52:12.165318] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:58.962 [2024-11-06 08:52:12.165341] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:58.962 [2024-11-06 08:52:12.165351] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:58.962 [2024-11-06 08:52:12.166189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:58.962 [2024-11-06 08:52:12.166210] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:58.962 [2024-11-06 08:52:12.166222] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:58.962 [2024-11-06 08:52:12.170846] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.962 [2024-11-06 08:52:12.170880] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:58.962 [2024-11-06 08:52:12.170895] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:58.962 [2024-11-06 08:52:12.171231] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:58.962 [2024-11-06 08:52:12.171250] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:58.962 [2024-11-06 08:52:12.172246] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:58.962 [2024-11-06 08:52:12.172266] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:58.962 [2024-11-06 08:52:12.172276] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:58.962 [2024-11-06 08:52:12.172287] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:58.962 [2024-11-06 08:52:12.172397] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:58.962 [2024-11-06 08:52:12.172405] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:58.962 [2024-11-06 08:52:12.172413] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:58.962 [2024-11-06 08:52:12.173256] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:58.962 [2024-11-06 08:52:12.174256] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:58.962 [2024-11-06 08:52:12.175261] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:58.962 [2024-11-06 08:52:12.176257] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.963 [2024-11-06 08:52:12.176325] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:58.963 [2024-11-06 08:52:12.177274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:58.963 [2024-11-06 08:52:12.177294] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:58.963 [2024-11-06 08:52:12.177303] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.177327] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:58.963 [2024-11-06 08:52:12.177342] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.177363] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.963 [2024-11-06 08:52:12.177374] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.963 [2024-11-06 08:52:12.177381] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.963 [2024-11-06 08:52:12.177398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.181850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.181873] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:58.963 [2024-11-06 08:52:12.181882] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:58.963 [2024-11-06 08:52:12.181889] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:58.963 [2024-11-06 08:52:12.181907] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:58.963 [2024-11-06 08:52:12.181915] 
nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:58.963 [2024-11-06 08:52:12.181922] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:58.963 [2024-11-06 08:52:12.181930] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.181942] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.181957] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.189844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.189882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.963 [2024-11-06 08:52:12.189899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.963 [2024-11-06 08:52:12.189911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.963 [2024-11-06 08:52:12.189923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.963 [2024-11-06 08:52:12.189932] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.189947] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.189962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.197843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.197866] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:58.963 [2024-11-06 08:52:12.197876] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.197895] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.197905] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.197918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.205841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.205916] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.205933] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:58.963 
[2024-11-06 08:52:12.205947] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:58.963 [2024-11-06 08:52:12.205955] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:58.963 [2024-11-06 08:52:12.205962] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.963 [2024-11-06 08:52:12.205971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.213841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.213865] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:58.963 [2024-11-06 08:52:12.213893] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.213909] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.213922] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.963 [2024-11-06 08:52:12.213930] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.963 [2024-11-06 08:52:12.213936] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.963 [2024-11-06 08:52:12.213945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.221844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.221872] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.221888] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.221905] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.963 [2024-11-06 08:52:12.221914] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.963 [2024-11-06 08:52:12.221920] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.963 [2024-11-06 08:52:12.221930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.229843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.229864] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229877] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229891] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229902] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229911] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229920] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229928] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:58.963 [2024-11-06 08:52:12.229936] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:58.963 [2024-11-06 08:52:12.229944] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:58.963 [2024-11-06 08:52:12.229969] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.237855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.237883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:58.963 [2024-11-06 08:52:12.245842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:58.963 [2024-11-06 08:52:12.245868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:59.225 [2024-11-06 08:52:12.253844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:59.225 [2024-11-06 
08:52:12.253869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:59.225 [2024-11-06 08:52:12.261844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:59.225 [2024-11-06 08:52:12.261876] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:59.225 [2024-11-06 08:52:12.261887] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:59.225 [2024-11-06 08:52:12.261893] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:59.225 [2024-11-06 08:52:12.261899] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:59.225 [2024-11-06 08:52:12.261908] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:59.225 [2024-11-06 08:52:12.261919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:59.225 [2024-11-06 08:52:12.261931] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:59.225 [2024-11-06 08:52:12.261939] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:59.225 [2024-11-06 08:52:12.261945] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:59.225 [2024-11-06 08:52:12.261954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:59.225 [2024-11-06 08:52:12.261965] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:59.225 [2024-11-06 08:52:12.261973] 
nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:59.225 [2024-11-06 08:52:12.261979] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:59.225 [2024-11-06 08:52:12.261987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:59.225 [2024-11-06 08:52:12.262003] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:59.225 [2024-11-06 08:52:12.262012] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:59.225 [2024-11-06 08:52:12.262018] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:59.225 [2024-11-06 08:52:12.262027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:59.225 [2024-11-06 08:52:12.269843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:59.225 [2024-11-06 08:52:12.269871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:59.225 [2024-11-06 08:52:12.269889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:59.225 [2024-11-06 08:52:12.269901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:59.225 ===================================================== 00:15:59.225 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:59.225 ===================================================== 00:15:59.225 Controller Capabilities/Features 00:15:59.225 
================================ 00:15:59.225 Vendor ID: 4e58 00:15:59.225 Subsystem Vendor ID: 4e58 00:15:59.225 Serial Number: SPDK2 00:15:59.225 Model Number: SPDK bdev Controller 00:15:59.225 Firmware Version: 25.01 00:15:59.225 Recommended Arb Burst: 6 00:15:59.225 IEEE OUI Identifier: 8d 6b 50 00:15:59.225 Multi-path I/O 00:15:59.225 May have multiple subsystem ports: Yes 00:15:59.225 May have multiple controllers: Yes 00:15:59.225 Associated with SR-IOV VF: No 00:15:59.225 Max Data Transfer Size: 131072 00:15:59.225 Max Number of Namespaces: 32 00:15:59.225 Max Number of I/O Queues: 127 00:15:59.225 NVMe Specification Version (VS): 1.3 00:15:59.225 NVMe Specification Version (Identify): 1.3 00:15:59.225 Maximum Queue Entries: 256 00:15:59.225 Contiguous Queues Required: Yes 00:15:59.225 Arbitration Mechanisms Supported 00:15:59.225 Weighted Round Robin: Not Supported 00:15:59.225 Vendor Specific: Not Supported 00:15:59.225 Reset Timeout: 15000 ms 00:15:59.225 Doorbell Stride: 4 bytes 00:15:59.225 NVM Subsystem Reset: Not Supported 00:15:59.225 Command Sets Supported 00:15:59.225 NVM Command Set: Supported 00:15:59.225 Boot Partition: Not Supported 00:15:59.225 Memory Page Size Minimum: 4096 bytes 00:15:59.225 Memory Page Size Maximum: 4096 bytes 00:15:59.225 Persistent Memory Region: Not Supported 00:15:59.225 Optional Asynchronous Events Supported 00:15:59.225 Namespace Attribute Notices: Supported 00:15:59.225 Firmware Activation Notices: Not Supported 00:15:59.225 ANA Change Notices: Not Supported 00:15:59.225 PLE Aggregate Log Change Notices: Not Supported 00:15:59.225 LBA Status Info Alert Notices: Not Supported 00:15:59.225 EGE Aggregate Log Change Notices: Not Supported 00:15:59.225 Normal NVM Subsystem Shutdown event: Not Supported 00:15:59.225 Zone Descriptor Change Notices: Not Supported 00:15:59.225 Discovery Log Change Notices: Not Supported 00:15:59.225 Controller Attributes 00:15:59.225 128-bit Host Identifier: Supported 00:15:59.225 
Non-Operational Permissive Mode: Not Supported 00:15:59.225 NVM Sets: Not Supported 00:15:59.225 Read Recovery Levels: Not Supported 00:15:59.225 Endurance Groups: Not Supported 00:15:59.225 Predictable Latency Mode: Not Supported 00:15:59.225 Traffic Based Keep ALive: Not Supported 00:15:59.225 Namespace Granularity: Not Supported 00:15:59.225 SQ Associations: Not Supported 00:15:59.225 UUID List: Not Supported 00:15:59.225 Multi-Domain Subsystem: Not Supported 00:15:59.225 Fixed Capacity Management: Not Supported 00:15:59.225 Variable Capacity Management: Not Supported 00:15:59.225 Delete Endurance Group: Not Supported 00:15:59.225 Delete NVM Set: Not Supported 00:15:59.225 Extended LBA Formats Supported: Not Supported 00:15:59.225 Flexible Data Placement Supported: Not Supported 00:15:59.225 00:15:59.225 Controller Memory Buffer Support 00:15:59.225 ================================ 00:15:59.225 Supported: No 00:15:59.225 00:15:59.225 Persistent Memory Region Support 00:15:59.225 ================================ 00:15:59.225 Supported: No 00:15:59.225 00:15:59.225 Admin Command Set Attributes 00:15:59.225 ============================ 00:15:59.225 Security Send/Receive: Not Supported 00:15:59.225 Format NVM: Not Supported 00:15:59.225 Firmware Activate/Download: Not Supported 00:15:59.225 Namespace Management: Not Supported 00:15:59.225 Device Self-Test: Not Supported 00:15:59.225 Directives: Not Supported 00:15:59.225 NVMe-MI: Not Supported 00:15:59.225 Virtualization Management: Not Supported 00:15:59.225 Doorbell Buffer Config: Not Supported 00:15:59.225 Get LBA Status Capability: Not Supported 00:15:59.225 Command & Feature Lockdown Capability: Not Supported 00:15:59.225 Abort Command Limit: 4 00:15:59.225 Async Event Request Limit: 4 00:15:59.225 Number of Firmware Slots: N/A 00:15:59.225 Firmware Slot 1 Read-Only: N/A 00:15:59.225 Firmware Activation Without Reset: N/A 00:15:59.225 Multiple Update Detection Support: N/A 00:15:59.225 Firmware Update 
Granularity: No Information Provided 00:15:59.225 Per-Namespace SMART Log: No 00:15:59.225 Asymmetric Namespace Access Log Page: Not Supported 00:15:59.225 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:59.225 Command Effects Log Page: Supported 00:15:59.225 Get Log Page Extended Data: Supported 00:15:59.225 Telemetry Log Pages: Not Supported 00:15:59.225 Persistent Event Log Pages: Not Supported 00:15:59.225 Supported Log Pages Log Page: May Support 00:15:59.225 Commands Supported & Effects Log Page: Not Supported 00:15:59.225 Feature Identifiers & Effects Log Page:May Support 00:15:59.225 NVMe-MI Commands & Effects Log Page: May Support 00:15:59.226 Data Area 4 for Telemetry Log: Not Supported 00:15:59.226 Error Log Page Entries Supported: 128 00:15:59.226 Keep Alive: Supported 00:15:59.226 Keep Alive Granularity: 10000 ms 00:15:59.226 00:15:59.226 NVM Command Set Attributes 00:15:59.226 ========================== 00:15:59.226 Submission Queue Entry Size 00:15:59.226 Max: 64 00:15:59.226 Min: 64 00:15:59.226 Completion Queue Entry Size 00:15:59.226 Max: 16 00:15:59.226 Min: 16 00:15:59.226 Number of Namespaces: 32 00:15:59.226 Compare Command: Supported 00:15:59.226 Write Uncorrectable Command: Not Supported 00:15:59.226 Dataset Management Command: Supported 00:15:59.226 Write Zeroes Command: Supported 00:15:59.226 Set Features Save Field: Not Supported 00:15:59.226 Reservations: Not Supported 00:15:59.226 Timestamp: Not Supported 00:15:59.226 Copy: Supported 00:15:59.226 Volatile Write Cache: Present 00:15:59.226 Atomic Write Unit (Normal): 1 00:15:59.226 Atomic Write Unit (PFail): 1 00:15:59.226 Atomic Compare & Write Unit: 1 00:15:59.226 Fused Compare & Write: Supported 00:15:59.226 Scatter-Gather List 00:15:59.226 SGL Command Set: Supported (Dword aligned) 00:15:59.226 SGL Keyed: Not Supported 00:15:59.226 SGL Bit Bucket Descriptor: Not Supported 00:15:59.226 SGL Metadata Pointer: Not Supported 00:15:59.226 Oversized SGL: Not Supported 00:15:59.226 SGL 
Metadata Address: Not Supported 00:15:59.226 SGL Offset: Not Supported 00:15:59.226 Transport SGL Data Block: Not Supported 00:15:59.226 Replay Protected Memory Block: Not Supported 00:15:59.226 00:15:59.226 Firmware Slot Information 00:15:59.226 ========================= 00:15:59.226 Active slot: 1 00:15:59.226 Slot 1 Firmware Revision: 25.01 00:15:59.226 00:15:59.226 00:15:59.226 Commands Supported and Effects 00:15:59.226 ============================== 00:15:59.226 Admin Commands 00:15:59.226 -------------- 00:15:59.226 Get Log Page (02h): Supported 00:15:59.226 Identify (06h): Supported 00:15:59.226 Abort (08h): Supported 00:15:59.226 Set Features (09h): Supported 00:15:59.226 Get Features (0Ah): Supported 00:15:59.226 Asynchronous Event Request (0Ch): Supported 00:15:59.226 Keep Alive (18h): Supported 00:15:59.226 I/O Commands 00:15:59.226 ------------ 00:15:59.226 Flush (00h): Supported LBA-Change 00:15:59.226 Write (01h): Supported LBA-Change 00:15:59.226 Read (02h): Supported 00:15:59.226 Compare (05h): Supported 00:15:59.226 Write Zeroes (08h): Supported LBA-Change 00:15:59.226 Dataset Management (09h): Supported LBA-Change 00:15:59.226 Copy (19h): Supported LBA-Change 00:15:59.226 00:15:59.226 Error Log 00:15:59.226 ========= 00:15:59.226 00:15:59.226 Arbitration 00:15:59.226 =========== 00:15:59.226 Arbitration Burst: 1 00:15:59.226 00:15:59.226 Power Management 00:15:59.226 ================ 00:15:59.226 Number of Power States: 1 00:15:59.226 Current Power State: Power State #0 00:15:59.226 Power State #0: 00:15:59.226 Max Power: 0.00 W 00:15:59.226 Non-Operational State: Operational 00:15:59.226 Entry Latency: Not Reported 00:15:59.226 Exit Latency: Not Reported 00:15:59.226 Relative Read Throughput: 0 00:15:59.226 Relative Read Latency: 0 00:15:59.226 Relative Write Throughput: 0 00:15:59.226 Relative Write Latency: 0 00:15:59.226 Idle Power: Not Reported 00:15:59.226 Active Power: Not Reported 00:15:59.226 Non-Operational Permissive Mode: Not 
Supported 00:15:59.226 00:15:59.226 Health Information 00:15:59.226 ================== 00:15:59.226 Critical Warnings: 00:15:59.226 Available Spare Space: OK 00:15:59.226 Temperature: OK 00:15:59.226 Device Reliability: OK 00:15:59.226 Read Only: No 00:15:59.226 Volatile Memory Backup: OK 00:15:59.226 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:59.226 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:59.226 Available Spare: 0% 00:15:59.226 Available Sp[2024-11-06 08:52:12.270027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:59.226 [2024-11-06 08:52:12.277845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:59.226 [2024-11-06 08:52:12.277900] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:59.226 [2024-11-06 08:52:12.277918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.226 [2024-11-06 08:52:12.277929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.226 [2024-11-06 08:52:12.277938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.226 [2024-11-06 08:52:12.277948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:59.226 [2024-11-06 08:52:12.278011] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:59.226 [2024-11-06 08:52:12.278032] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:59.226 
[2024-11-06 08:52:12.279014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:59.226 [2024-11-06 08:52:12.279086] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:59.226 [2024-11-06 08:52:12.279102] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:59.226 [2024-11-06 08:52:12.280031] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:59.226 [2024-11-06 08:52:12.280056] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:59.226 [2024-11-06 08:52:12.280108] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:59.226 [2024-11-06 08:52:12.282859] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:59.226 are Threshold: 0% 00:15:59.226 Life Percentage Used: 0% 00:15:59.226 Data Units Read: 0 00:15:59.226 Data Units Written: 0 00:15:59.226 Host Read Commands: 0 00:15:59.226 Host Write Commands: 0 00:15:59.226 Controller Busy Time: 0 minutes 00:15:59.226 Power Cycles: 0 00:15:59.226 Power On Hours: 0 hours 00:15:59.226 Unsafe Shutdowns: 0 00:15:59.226 Unrecoverable Media Errors: 0 00:15:59.226 Lifetime Error Log Entries: 0 00:15:59.226 Warning Temperature Time: 0 minutes 00:15:59.226 Critical Temperature Time: 0 minutes 00:15:59.226 00:15:59.226 Number of Queues 00:15:59.226 ================ 00:15:59.226 Number of I/O Submission Queues: 127 00:15:59.226 Number of I/O Completion Queues: 127 00:15:59.226 00:15:59.226 Active Namespaces 00:15:59.226 ================= 00:15:59.226 Namespace ID:1 00:15:59.226 Error Recovery Timeout: Unlimited 
00:15:59.226 Command Set Identifier: NVM (00h) 00:15:59.226 Deallocate: Supported 00:15:59.226 Deallocated/Unwritten Error: Not Supported 00:15:59.226 Deallocated Read Value: Unknown 00:15:59.226 Deallocate in Write Zeroes: Not Supported 00:15:59.226 Deallocated Guard Field: 0xFFFF 00:15:59.226 Flush: Supported 00:15:59.226 Reservation: Supported 00:15:59.226 Namespace Sharing Capabilities: Multiple Controllers 00:15:59.226 Size (in LBAs): 131072 (0GiB) 00:15:59.226 Capacity (in LBAs): 131072 (0GiB) 00:15:59.226 Utilization (in LBAs): 131072 (0GiB) 00:15:59.226 NGUID: 6732BDCEC82E4D2A923E85E4275B5FF1 00:15:59.226 UUID: 6732bdce-c82e-4d2a-923e-85e4275b5ff1 00:15:59.226 Thin Provisioning: Not Supported 00:15:59.226 Per-NS Atomic Units: Yes 00:15:59.226 Atomic Boundary Size (Normal): 0 00:15:59.226 Atomic Boundary Size (PFail): 0 00:15:59.226 Atomic Boundary Offset: 0 00:15:59.226 Maximum Single Source Range Length: 65535 00:15:59.226 Maximum Copy Length: 65535 00:15:59.226 Maximum Source Range Count: 1 00:15:59.226 NGUID/EUI64 Never Reused: No 00:15:59.226 Namespace Write Protected: No 00:15:59.226 Number of LBA Formats: 1 00:15:59.226 Current LBA Format: LBA Format #00 00:15:59.226 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:59.226 00:15:59.227 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:59.484 [2024-11-06 08:52:12.531962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.746 Initializing NVMe Controllers 00:16:04.746 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.747 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:04.747 Initialization complete. Launching workers. 00:16:04.747 ======================================================== 00:16:04.747 Latency(us) 00:16:04.747 Device Information : IOPS MiB/s Average min max 00:16:04.747 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34312.26 134.03 3729.45 1175.81 9853.43 00:16:04.747 ======================================================== 00:16:04.747 Total : 34312.26 134.03 3729.45 1175.81 9853.43 00:16:04.747 00:16:04.747 [2024-11-06 08:52:17.642202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.747 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:04.747 [2024-11-06 08:52:17.907928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.008 Initializing NVMe Controllers 00:16:10.008 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:10.008 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:10.008 Initialization complete. Launching workers. 
00:16:10.008 ======================================================== 00:16:10.008 Latency(us) 00:16:10.008 Device Information : IOPS MiB/s Average min max 00:16:10.008 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31290.63 122.23 4089.98 1200.13 8265.06 00:16:10.008 ======================================================== 00:16:10.008 Total : 31290.63 122.23 4089.98 1200.13 8265.06 00:16:10.008 00:16:10.008 [2024-11-06 08:52:22.929867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.008 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:10.008 [2024-11-06 08:52:23.160867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:15.380 [2024-11-06 08:52:28.296976] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:15.380 Initializing NVMe Controllers 00:16:15.380 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.380 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:15.380 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:15.380 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:15.380 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:15.380 Initialization complete. Launching workers. 
00:16:15.380 Starting thread on core 2 00:16:15.380 Starting thread on core 3 00:16:15.380 Starting thread on core 1 00:16:15.380 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:15.380 [2024-11-06 08:52:28.629396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.663 [2024-11-06 08:52:31.682653] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.663 Initializing NVMe Controllers 00:16:18.663 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.663 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:18.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:18.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:18.663 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:18.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:18.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:18.663 Initialization complete. Launching workers. 
00:16:18.663 Starting thread on core 1 with urgent priority queue 00:16:18.663 Starting thread on core 2 with urgent priority queue 00:16:18.663 Starting thread on core 3 with urgent priority queue 00:16:18.663 Starting thread on core 0 with urgent priority queue 00:16:18.663 SPDK bdev Controller (SPDK2 ) core 0: 4399.33 IO/s 22.73 secs/100000 ios 00:16:18.663 SPDK bdev Controller (SPDK2 ) core 1: 5928.00 IO/s 16.87 secs/100000 ios 00:16:18.663 SPDK bdev Controller (SPDK2 ) core 2: 5187.67 IO/s 19.28 secs/100000 ios 00:16:18.663 SPDK bdev Controller (SPDK2 ) core 3: 6555.00 IO/s 15.26 secs/100000 ios 00:16:18.663 ======================================================== 00:16:18.663 00:16:18.663 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:18.920 [2024-11-06 08:52:31.983315] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.920 Initializing NVMe Controllers 00:16:18.920 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.920 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.920 Namespace ID: 1 size: 0GB 00:16:18.920 Initialization complete. 00:16:18.920 INFO: using host memory buffer for IO 00:16:18.920 Hello world! 
00:16:18.920 [2024-11-06 08:52:31.993373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.920 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:19.178 [2024-11-06 08:52:32.314251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.551 Initializing NVMe Controllers 00:16:20.551 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.551 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.551 Initialization complete. Launching workers. 00:16:20.551 submit (in ns) avg, min, max = 7599.9, 3552.2, 4017504.4 00:16:20.551 complete (in ns) avg, min, max = 26811.7, 2062.2, 4017717.8 00:16:20.551 00:16:20.551 Submit histogram 00:16:20.551 ================ 00:16:20.551 Range in us Cumulative Count 00:16:20.551 3.532 - 3.556: 0.0228% ( 3) 00:16:20.551 3.556 - 3.579: 1.6640% ( 216) 00:16:20.551 3.579 - 3.603: 7.6514% ( 788) 00:16:20.551 3.603 - 3.627: 20.8723% ( 1740) 00:16:20.551 3.627 - 3.650: 31.5402% ( 1404) 00:16:20.551 3.650 - 3.674: 39.7766% ( 1084) 00:16:20.551 3.674 - 3.698: 46.4554% ( 879) 00:16:20.551 3.698 - 3.721: 52.3896% ( 781) 00:16:20.551 3.721 - 3.745: 57.7236% ( 702) 00:16:20.551 3.745 - 3.769: 61.6519% ( 517) 00:16:20.551 3.769 - 3.793: 64.8279% ( 418) 00:16:20.551 3.793 - 3.816: 67.7000% ( 378) 00:16:20.551 3.816 - 3.840: 71.4383% ( 492) 00:16:20.551 3.840 - 3.864: 76.1872% ( 625) 00:16:20.551 3.864 - 3.887: 80.4650% ( 563) 00:16:20.551 3.887 - 3.911: 84.0970% ( 478) 00:16:20.551 3.911 - 3.935: 86.3840% ( 301) 00:16:20.551 3.935 - 3.959: 88.0860% ( 224) 00:16:20.551 3.959 - 3.982: 89.7500% ( 219) 00:16:20.551 3.982 - 4.006: 91.0189% ( 167) 00:16:20.551 4.006 - 4.030: 91.9763% ( 
126) 00:16:20.551 4.030 - 4.053: 92.7285% ( 99) 00:16:20.551 4.053 - 4.077: 93.4883% ( 100) 00:16:20.551 4.077 - 4.101: 94.3849% ( 118) 00:16:20.551 4.101 - 4.124: 94.9472% ( 74) 00:16:20.551 4.124 - 4.148: 95.6082% ( 87) 00:16:20.551 4.148 - 4.172: 95.9881% ( 50) 00:16:20.551 4.172 - 4.196: 96.2769% ( 38) 00:16:20.551 4.196 - 4.219: 96.3984% ( 16) 00:16:20.551 4.219 - 4.243: 96.5428% ( 19) 00:16:20.551 4.243 - 4.267: 96.6720% ( 17) 00:16:20.551 4.267 - 4.290: 96.7480% ( 10) 00:16:20.551 4.290 - 4.314: 96.9303% ( 24) 00:16:20.551 4.314 - 4.338: 97.0367% ( 14) 00:16:20.551 4.338 - 4.361: 97.1355% ( 13) 00:16:20.551 4.361 - 4.385: 97.1811% ( 6) 00:16:20.551 4.385 - 4.409: 97.2419% ( 8) 00:16:20.551 4.409 - 4.433: 97.2874% ( 6) 00:16:20.551 4.433 - 4.456: 97.3330% ( 6) 00:16:20.551 4.456 - 4.480: 97.3482% ( 2) 00:16:20.551 4.480 - 4.504: 97.3558% ( 1) 00:16:20.551 4.527 - 4.551: 97.3634% ( 1) 00:16:20.551 4.551 - 4.575: 97.3710% ( 1) 00:16:20.551 4.575 - 4.599: 97.3786% ( 1) 00:16:20.551 4.622 - 4.646: 97.3938% ( 2) 00:16:20.551 4.646 - 4.670: 97.4014% ( 1) 00:16:20.551 4.670 - 4.693: 97.4090% ( 1) 00:16:20.551 4.741 - 4.764: 97.4166% ( 1) 00:16:20.551 4.764 - 4.788: 97.4394% ( 3) 00:16:20.551 4.788 - 4.812: 97.4850% ( 6) 00:16:20.551 4.812 - 4.836: 97.5078% ( 3) 00:16:20.551 4.836 - 4.859: 97.5762% ( 9) 00:16:20.551 4.859 - 4.883: 97.6370% ( 8) 00:16:20.551 4.883 - 4.907: 97.6522% ( 2) 00:16:20.551 4.907 - 4.930: 97.6901% ( 5) 00:16:20.551 4.930 - 4.954: 97.7205% ( 4) 00:16:20.551 4.954 - 4.978: 97.7737% ( 7) 00:16:20.551 4.978 - 5.001: 97.8573% ( 11) 00:16:20.551 5.001 - 5.025: 97.8953% ( 5) 00:16:20.551 5.025 - 5.049: 97.9181% ( 3) 00:16:20.551 5.049 - 5.073: 97.9561% ( 5) 00:16:20.551 5.073 - 5.096: 98.0017% ( 6) 00:16:20.551 5.096 - 5.120: 98.0397% ( 5) 00:16:20.551 5.120 - 5.144: 98.0701% ( 4) 00:16:20.551 5.144 - 5.167: 98.0929% ( 3) 00:16:20.551 5.167 - 5.191: 98.1156% ( 3) 00:16:20.551 5.191 - 5.215: 98.1232% ( 1) 00:16:20.551 5.215 - 5.239: 98.1460% ( 3) 
00:16:20.551 5.239 - 5.262: 98.1612% ( 2) 00:16:20.551 5.262 - 5.286: 98.1688% ( 1) 00:16:20.551 5.286 - 5.310: 98.1764% ( 1) 00:16:20.551 5.310 - 5.333: 98.1840% ( 1) 00:16:20.551 5.333 - 5.357: 98.1916% ( 1) 00:16:20.551 5.357 - 5.381: 98.1992% ( 1) 00:16:20.551 5.452 - 5.476: 98.2068% ( 1) 00:16:20.551 5.641 - 5.665: 98.2144% ( 1) 00:16:20.551 5.665 - 5.689: 98.2220% ( 1) 00:16:20.551 5.689 - 5.713: 98.2372% ( 2) 00:16:20.551 5.760 - 5.784: 98.2448% ( 1) 00:16:20.551 5.784 - 5.807: 98.2524% ( 1) 00:16:20.551 5.902 - 5.926: 98.2676% ( 2) 00:16:20.551 5.973 - 5.997: 98.2752% ( 1) 00:16:20.551 5.997 - 6.021: 98.2828% ( 1) 00:16:20.551 6.044 - 6.068: 98.2904% ( 1) 00:16:20.551 6.068 - 6.116: 98.2980% ( 1) 00:16:20.551 6.163 - 6.210: 98.3056% ( 1) 00:16:20.551 6.210 - 6.258: 98.3132% ( 1) 00:16:20.551 6.258 - 6.305: 98.3284% ( 2) 00:16:20.551 6.637 - 6.684: 98.3360% ( 1) 00:16:20.551 6.874 - 6.921: 98.3588% ( 3) 00:16:20.551 6.921 - 6.969: 98.3664% ( 1) 00:16:20.551 7.301 - 7.348: 98.3740% ( 1) 00:16:20.551 7.585 - 7.633: 98.3816% ( 1) 00:16:20.551 7.633 - 7.680: 98.3892% ( 1) 00:16:20.551 7.727 - 7.775: 98.4120% ( 3) 00:16:20.551 7.870 - 7.917: 98.4196% ( 1) 00:16:20.551 7.917 - 7.964: 98.4272% ( 1) 00:16:20.551 7.964 - 8.012: 98.4348% ( 1) 00:16:20.551 8.154 - 8.201: 98.4500% ( 2) 00:16:20.551 8.201 - 8.249: 98.4576% ( 1) 00:16:20.551 8.296 - 8.344: 98.4652% ( 1) 00:16:20.551 8.391 - 8.439: 98.4804% ( 2) 00:16:20.551 8.486 - 8.533: 98.4880% ( 1) 00:16:20.551 8.533 - 8.581: 98.4956% ( 1) 00:16:20.551 8.676 - 8.723: 98.5032% ( 1) 00:16:20.551 8.723 - 8.770: 98.5108% ( 1) 00:16:20.551 8.770 - 8.818: 98.5183% ( 1) 00:16:20.551 8.865 - 8.913: 98.5335% ( 2) 00:16:20.551 8.913 - 8.960: 98.5487% ( 2) 00:16:20.551 8.960 - 9.007: 98.5563% ( 1) 00:16:20.551 9.007 - 9.055: 98.5639% ( 1) 00:16:20.551 9.055 - 9.102: 98.5791% ( 2) 00:16:20.551 9.150 - 9.197: 98.5943% ( 2) 00:16:20.551 9.244 - 9.292: 98.6019% ( 1) 00:16:20.551 9.292 - 9.339: 98.6323% ( 4) 00:16:20.551 9.339 - 
9.387: 98.6475% ( 2) 00:16:20.551 9.387 - 9.434: 98.6551% ( 1) 00:16:20.551 9.434 - 9.481: 98.6779% ( 3) 00:16:20.551 9.576 - 9.624: 98.6855% ( 1) 00:16:20.551 9.766 - 9.813: 98.7007% ( 2) 00:16:20.551 9.908 - 9.956: 98.7083% ( 1) 00:16:20.551 10.193 - 10.240: 98.7159% ( 1) 00:16:20.551 10.240 - 10.287: 98.7235% ( 1) 00:16:20.551 10.382 - 10.430: 98.7311% ( 1) 00:16:20.551 10.430 - 10.477: 98.7463% ( 2) 00:16:20.551 10.524 - 10.572: 98.7539% ( 1) 00:16:20.551 10.572 - 10.619: 98.7615% ( 1) 00:16:20.551 10.619 - 10.667: 98.7691% ( 1) 00:16:20.551 10.714 - 10.761: 98.7767% ( 1) 00:16:20.551 10.761 - 10.809: 98.7843% ( 1) 00:16:20.551 10.904 - 10.951: 98.7995% ( 2) 00:16:20.551 10.951 - 10.999: 98.8071% ( 1) 00:16:20.551 11.283 - 11.330: 98.8223% ( 2) 00:16:20.551 11.425 - 11.473: 98.8299% ( 1) 00:16:20.551 11.473 - 11.520: 98.8375% ( 1) 00:16:20.551 11.852 - 11.899: 98.8451% ( 1) 00:16:20.551 12.089 - 12.136: 98.8527% ( 1) 00:16:20.551 12.326 - 12.421: 98.8603% ( 1) 00:16:20.551 12.990 - 13.084: 98.8679% ( 1) 00:16:20.551 13.274 - 13.369: 98.8755% ( 1) 00:16:20.551 13.464 - 13.559: 98.8831% ( 1) 00:16:20.551 13.559 - 13.653: 98.8983% ( 2) 00:16:20.551 13.653 - 13.748: 98.9059% ( 1) 00:16:20.551 13.748 - 13.843: 98.9135% ( 1) 00:16:20.551 13.843 - 13.938: 98.9211% ( 1) 00:16:20.551 13.938 - 14.033: 98.9363% ( 2) 00:16:20.551 14.507 - 14.601: 98.9438% ( 1) 00:16:20.551 14.601 - 14.696: 98.9590% ( 2) 00:16:20.551 14.696 - 14.791: 98.9666% ( 1) 00:16:20.551 15.929 - 16.024: 98.9742% ( 1) 00:16:20.551 17.067 - 17.161: 98.9818% ( 1) 00:16:20.551 17.351 - 17.446: 98.9970% ( 2) 00:16:20.551 17.446 - 17.541: 99.0350% ( 5) 00:16:20.551 17.541 - 17.636: 99.0502% ( 2) 00:16:20.551 17.636 - 17.730: 99.0578% ( 1) 00:16:20.552 17.730 - 17.825: 99.1186% ( 8) 00:16:20.552 17.825 - 17.920: 99.1794% ( 8) 00:16:20.552 17.920 - 18.015: 99.2554% ( 10) 00:16:20.552 18.015 - 18.110: 99.3238% ( 9) 00:16:20.552 18.110 - 18.204: 99.3997% ( 10) 00:16:20.552 18.204 - 18.299: 99.4757% ( 10) 
00:16:20.552 18.299 - 18.394: 99.5669% ( 12) 00:16:20.552 18.394 - 18.489: 99.6429% ( 10) 00:16:20.552 18.489 - 18.584: 99.6809% ( 5) 00:16:20.552 18.584 - 18.679: 99.7265% ( 6) 00:16:20.552 18.679 - 18.773: 99.7341% ( 1) 00:16:20.552 18.773 - 18.868: 99.7797% ( 6) 00:16:20.552 18.868 - 18.963: 99.8024% ( 3) 00:16:20.552 18.963 - 19.058: 99.8176% ( 2) 00:16:20.552 20.290 - 20.385: 99.8252% ( 1) 00:16:20.552 21.428 - 21.523: 99.8328% ( 1) 00:16:20.552 22.471 - 22.566: 99.8404% ( 1) 00:16:20.552 22.756 - 22.850: 99.8480% ( 1) 00:16:20.552 23.040 - 23.135: 99.8556% ( 1) 00:16:20.552 23.419 - 23.514: 99.8632% ( 1) 00:16:20.552 24.841 - 25.031: 99.8708% ( 1) 00:16:20.552 25.031 - 25.221: 99.8784% ( 1) 00:16:20.552 25.221 - 25.410: 99.8860% ( 1) 00:16:20.552 27.117 - 27.307: 99.8936% ( 1) 00:16:20.552 29.013 - 29.203: 99.9012% ( 1) 00:16:20.552 29.772 - 29.961: 99.9088% ( 1) 00:16:20.552 3980.705 - 4004.978: 99.9468% ( 5) 00:16:20.552 4004.978 - 4029.250: 100.0000% ( 7) 00:16:20.552 00:16:20.552 Complete histogram 00:16:20.552 ================== 00:16:20.552 Range in us Cumulative Count 00:16:20.552 2.062 - 2.074: 7.4007% ( 974) 00:16:20.552 2.074 - 2.086: 30.3928% ( 3026) 00:16:20.552 2.086 - 2.098: 32.2544% ( 245) 00:16:20.552 2.098 - 2.110: 48.7881% ( 2176) 00:16:20.552 2.110 - 2.121: 61.6746% ( 1696) 00:16:20.552 2.121 - 2.133: 63.3007% ( 214) 00:16:20.552 2.133 - 2.145: 69.4476% ( 809) 00:16:20.552 2.145 - 2.157: 74.1281% ( 616) 00:16:20.552 2.157 - 2.169: 75.1007% ( 128) 00:16:20.552 2.169 - 2.181: 80.3054% ( 685) 00:16:20.552 2.181 - 2.193: 82.8736% ( 338) 00:16:20.552 2.193 - 2.204: 83.3523% ( 63) 00:16:20.552 2.204 - 2.216: 85.5254% ( 286) 00:16:20.552 2.216 - 2.228: 88.3747% ( 375) 00:16:20.552 2.228 - 2.240: 89.8488% ( 194) 00:16:20.552 2.240 - 2.252: 92.3486% ( 329) 00:16:20.552 2.252 - 2.264: 93.6707% ( 174) 00:16:20.552 2.264 - 2.276: 94.0430% ( 49) 00:16:20.552 2.276 - 2.287: 94.4305% ( 51) 00:16:20.552 2.287 - 2.299: 95.0232% ( 78) 00:16:20.552 2.299 - 
2.311: 95.4031% ( 50) 00:16:20.552 2.311 - 2.323: 95.5550% ( 20) 00:16:20.552 2.323 - 2.335: 95.6006% ( 6) 00:16:20.552 2.335 - 2.347: 95.6462% ( 6) 00:16:20.552 2.347 - 2.359: 95.6994% ( 7) 00:16:20.552 2.359 - 2.370: 95.7982% ( 13) 00:16:20.552 2.370 - 2.382: 96.0641% ( 35) 00:16:20.552 2.382 - 2.394: 96.5580% ( 65) 00:16:20.552 2.394 - 2.406: 96.8695% ( 41) 00:16:20.552 2.406 - 2.418: 97.1051% ( 31) 00:16:20.552 2.418 - 2.430: 97.3102% ( 27) 00:16:20.552 2.430 - 2.441: 97.5078% ( 26) 00:16:20.552 2.441 - 2.453: 97.6142% ( 14) 00:16:20.552 2.453 - 2.465: 97.7889% ( 23) 00:16:20.552 2.465 - 2.477: 97.8877% ( 13) 00:16:20.552 2.477 - 2.489: 98.0017% ( 15) 00:16:20.552 2.489 - 2.501: 98.1004% ( 13) 00:16:20.552 2.501 - 2.513: 98.2144% ( 15) 00:16:20.552 2.513 - 2.524: 98.2524% ( 5) 00:16:20.552 2.524 - 2.536: 98.3056% ( 7) 00:16:20.552 2.536 - 2.548: 98.3360% ( 4) 00:16:20.552 2.548 - 2.560: 98.3588% ( 3) 00:16:20.552 2.560 - 2.572: 98.3740% ( 2) 00:16:20.552 2.572 - 2.584: 98.4044% ( 4) 00:16:20.552 2.584 - 2.596: 98.4272% ( 3) 00:16:20.552 2.596 - 2.607: 98.4348% ( 1) 00:16:20.552 2.619 - 2.631: 98.4424% ( 1) 00:16:20.552 2.750 - 2.761: 98.4576% ( 2) 00:16:20.552 3.022 - 3.034: 98.4652% ( 1) 00:16:20.552 3.461 - 3.484: 98.4728% ( 1) 00:16:20.552 3.627 - 3.650: 98.4804% ( 1) 00:16:20.552 3.698 - 3.721: 98.4880% ( 1) 00:16:20.552 3.721 - 3.745: 98.5032% ( 2) 00:16:20.552 3.745 - 3.769: 98.5108% ( 1) 00:16:20.552 3.793 - 3.816: 98.5183% ( 1) 00:16:20.552 3.816 - 3.840: 98.5259% ( 1) 00:16:20.552 3.840 - 3.864: 98.5411% ( 2) 00:16:20.552 3.864 - 3.887: 98.5487% ( 1) 00:16:20.552 3.911 - 3.935: 98.5563% ( 1) 00:16:20.552 4.053 - 4.077: 98.5639% ( 1) 00:16:20.552 4.077 - 4.101: 98.5867% ( 3) 00:16:20.552 4.148 - 4.172: 98.6095% ( 3) 00:16:20.552 4.290 - 4.314: 98.6171% ( 1) 00:16:20.552 5.428 - 5.452: 98.6247% ( 1) 00:16:20.552 5.665 - 5.689: 98.6323% ( 1) 00:16:20.552 5.713 - 5.736: 98.6399% ( 1) 00:16:20.552 5.736 - 5.760: 98.6475% ( 1) 00:16:20.552 5.997 - 6.021: 
98.6551% ( 1) 00:16:20.552 [2024-11-06 08:52:33.415650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.552 6.021 - 6.044: 98.6703% ( 2) 00:16:20.552 6.400 - 6.447: 98.6855% ( 2) 00:16:20.552 6.542 - 6.590: 98.6931% ( 1) 00:16:20.552 6.827 - 6.874: 98.7235% ( 4) 00:16:20.552 7.064 - 7.111: 98.7387% ( 2) 00:16:20.552 7.159 - 7.206: 98.7539% ( 2) 00:16:20.552 7.206 - 7.253: 98.7615% ( 1) 00:16:20.552 7.396 - 7.443: 98.7691% ( 1) 00:16:20.552 7.443 - 7.490: 98.7767% ( 1) 00:16:20.552 7.585 - 7.633: 98.7843% ( 1) 00:16:20.552 7.727 - 7.775: 98.7919% ( 1) 00:16:20.552 8.012 - 8.059: 98.7995% ( 1) 00:16:20.552 8.391 - 8.439: 98.8071% ( 1) 00:16:20.552 9.434 - 9.481: 98.8147% ( 1) 00:16:20.552 15.360 - 15.455: 98.8299% ( 2) 00:16:20.552 15.550 - 15.644: 98.8375% ( 1) 00:16:20.552 15.644 - 15.739: 98.8907% ( 7) 00:16:20.552 15.834 - 15.929: 98.9211% ( 4) 00:16:20.552 15.929 - 16.024: 98.9514% ( 4) 00:16:20.552 16.024 - 16.119: 98.9818% ( 4) 00:16:20.552 16.119 - 16.213: 99.0122% ( 4) 00:16:20.552 16.213 - 16.308: 99.0654% ( 7) 00:16:20.552 16.308 - 16.403: 99.0730% ( 1) 00:16:20.552 16.403 - 16.498: 99.1262% ( 7) 00:16:20.552 16.498 - 16.593: 99.1718% ( 6) 00:16:20.552 16.593 - 16.687: 99.1794% ( 1) 00:16:20.552 16.687 - 16.782: 99.2174% ( 5) 00:16:20.552 16.782 - 16.877: 99.2250% ( 1) 00:16:20.552 16.877 - 16.972: 99.2478% ( 3) 00:16:20.552 16.972 - 17.067: 99.2554% ( 1) 00:16:20.552 17.161 - 17.256: 99.2706% ( 2) 00:16:20.552 17.256 - 17.351: 99.2858% ( 2) 00:16:20.552 17.351 - 17.446: 99.2934% ( 1) 00:16:20.552 17.541 - 17.636: 99.3010% ( 1) 00:16:20.552 17.636 - 17.730: 99.3086% ( 1) 00:16:20.552 17.920 - 18.015: 99.3314% ( 3) 00:16:20.552 18.204 - 18.299: 99.3390% ( 1) 00:16:20.552 18.299 - 18.394: 99.3466% ( 1) 00:16:20.552 18.394 - 18.489: 99.3618% ( 2) 00:16:20.552 18.679 - 18.773: 99.3693% ( 1) 00:16:20.552 18.963 - 19.058: 99.3769% ( 1) 00:16:20.552 19.058 - 19.153: 99.3845% ( 1) 00:16:20.552 
3131.164 - 3155.437: 99.3921% ( 1) 00:16:20.552 3980.705 - 4004.978: 99.7417% ( 46) 00:16:20.552 4004.978 - 4029.250: 100.0000% ( 34) 00:16:20.552 00:16:20.552 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:20.552 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:20.552 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:20.552 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:20.552 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:20.552 [ 00:16:20.552 { 00:16:20.552 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.552 "subtype": "Discovery", 00:16:20.552 "listen_addresses": [], 00:16:20.552 "allow_any_host": true, 00:16:20.552 "hosts": [] 00:16:20.552 }, 00:16:20.552 { 00:16:20.552 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:20.552 "subtype": "NVMe", 00:16:20.552 "listen_addresses": [ 00:16:20.552 { 00:16:20.552 "trtype": "VFIOUSER", 00:16:20.552 "adrfam": "IPv4", 00:16:20.552 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:20.552 "trsvcid": "0" 00:16:20.552 } 00:16:20.552 ], 00:16:20.552 "allow_any_host": true, 00:16:20.552 "hosts": [], 00:16:20.552 "serial_number": "SPDK1", 00:16:20.552 "model_number": "SPDK bdev Controller", 00:16:20.552 "max_namespaces": 32, 00:16:20.552 "min_cntlid": 1, 00:16:20.552 "max_cntlid": 65519, 00:16:20.552 "namespaces": [ 00:16:20.552 { 00:16:20.552 "nsid": 1, 00:16:20.552 "bdev_name": "Malloc1", 00:16:20.552 "name": "Malloc1", 00:16:20.552 "nguid": "C066DC4F8866495A878B5255C0C6EBA6", 00:16:20.552 "uuid": 
"c066dc4f-8866-495a-878b-5255c0c6eba6" 00:16:20.552 }, 00:16:20.552 { 00:16:20.552 "nsid": 2, 00:16:20.552 "bdev_name": "Malloc3", 00:16:20.552 "name": "Malloc3", 00:16:20.552 "nguid": "CB35DE5B04F9482280CCBF07063C1D21", 00:16:20.552 "uuid": "cb35de5b-04f9-4822-80cc-bf07063c1d21" 00:16:20.552 } 00:16:20.552 ] 00:16:20.552 }, 00:16:20.552 { 00:16:20.553 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:20.553 "subtype": "NVMe", 00:16:20.553 "listen_addresses": [ 00:16:20.553 { 00:16:20.553 "trtype": "VFIOUSER", 00:16:20.553 "adrfam": "IPv4", 00:16:20.553 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:20.553 "trsvcid": "0" 00:16:20.553 } 00:16:20.553 ], 00:16:20.553 "allow_any_host": true, 00:16:20.553 "hosts": [], 00:16:20.553 "serial_number": "SPDK2", 00:16:20.553 "model_number": "SPDK bdev Controller", 00:16:20.553 "max_namespaces": 32, 00:16:20.553 "min_cntlid": 1, 00:16:20.553 "max_cntlid": 65519, 00:16:20.553 "namespaces": [ 00:16:20.553 { 00:16:20.553 "nsid": 1, 00:16:20.553 "bdev_name": "Malloc2", 00:16:20.553 "name": "Malloc2", 00:16:20.553 "nguid": "6732BDCEC82E4D2A923E85E4275B5FF1", 00:16:20.553 "uuid": "6732bdce-c82e-4d2a-923e-85e4275b5ff1" 00:16:20.553 } 00:16:20.553 ] 00:16:20.553 } 00:16:20.553 ] 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=807699 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:20.553 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:20.811 [2024-11-06 08:52:33.957327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.811 Malloc4 00:16:20.811 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:21.376 [2024-11-06 08:52:34.382358] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:21.376 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:21.376 Asynchronous Event Request test 00:16:21.376 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.376 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:21.376 Registering asynchronous event callbacks... 00:16:21.376 Starting namespace attribute notice tests for all controllers... 00:16:21.376 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:21.376 aer_cb - Changed Namespace 00:16:21.376 Cleaning up... 
00:16:21.376 [ 00:16:21.376 { 00:16:21.376 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.376 "subtype": "Discovery", 00:16:21.376 "listen_addresses": [], 00:16:21.376 "allow_any_host": true, 00:16:21.376 "hosts": [] 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:21.376 "subtype": "NVMe", 00:16:21.376 "listen_addresses": [ 00:16:21.376 { 00:16:21.376 "trtype": "VFIOUSER", 00:16:21.376 "adrfam": "IPv4", 00:16:21.376 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:21.376 "trsvcid": "0" 00:16:21.376 } 00:16:21.376 ], 00:16:21.376 "allow_any_host": true, 00:16:21.376 "hosts": [], 00:16:21.376 "serial_number": "SPDK1", 00:16:21.376 "model_number": "SPDK bdev Controller", 00:16:21.376 "max_namespaces": 32, 00:16:21.376 "min_cntlid": 1, 00:16:21.376 "max_cntlid": 65519, 00:16:21.376 "namespaces": [ 00:16:21.376 { 00:16:21.376 "nsid": 1, 00:16:21.376 "bdev_name": "Malloc1", 00:16:21.376 "name": "Malloc1", 00:16:21.376 "nguid": "C066DC4F8866495A878B5255C0C6EBA6", 00:16:21.376 "uuid": "c066dc4f-8866-495a-878b-5255c0c6eba6" 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "nsid": 2, 00:16:21.376 "bdev_name": "Malloc3", 00:16:21.376 "name": "Malloc3", 00:16:21.376 "nguid": "CB35DE5B04F9482280CCBF07063C1D21", 00:16:21.376 "uuid": "cb35de5b-04f9-4822-80cc-bf07063c1d21" 00:16:21.376 } 00:16:21.376 ] 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:21.376 "subtype": "NVMe", 00:16:21.376 "listen_addresses": [ 00:16:21.376 { 00:16:21.376 "trtype": "VFIOUSER", 00:16:21.376 "adrfam": "IPv4", 00:16:21.376 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:21.376 "trsvcid": "0" 00:16:21.376 } 00:16:21.376 ], 00:16:21.376 "allow_any_host": true, 00:16:21.376 "hosts": [], 00:16:21.376 "serial_number": "SPDK2", 00:16:21.376 "model_number": "SPDK bdev Controller", 00:16:21.376 "max_namespaces": 32, 00:16:21.376 "min_cntlid": 1, 00:16:21.376 "max_cntlid": 65519, 00:16:21.376 "namespaces": [ 
00:16:21.376 { 00:16:21.376 "nsid": 1, 00:16:21.376 "bdev_name": "Malloc2", 00:16:21.376 "name": "Malloc2", 00:16:21.376 "nguid": "6732BDCEC82E4D2A923E85E4275B5FF1", 00:16:21.376 "uuid": "6732bdce-c82e-4d2a-923e-85e4275b5ff1" 00:16:21.376 }, 00:16:21.376 { 00:16:21.376 "nsid": 2, 00:16:21.376 "bdev_name": "Malloc4", 00:16:21.376 "name": "Malloc4", 00:16:21.376 "nguid": "6F106C262EDB4035BA1CCEA73FD40C0F", 00:16:21.376 "uuid": "6f106c26-2edb-4035-ba1c-cea73fd40c0f" 00:16:21.376 } 00:16:21.376 ] 00:16:21.376 } 00:16:21.376 ] 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 807699 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 801973 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 801973 ']' 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 801973 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 801973 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 801973' 00:16:21.634 killing process with pid 801973 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 801973 00:16:21.634 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 801973 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=807842 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 807842' 00:16:21.892 Process pid: 807842 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 807842 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 807842 ']' 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.892 08:52:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.892 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:21.892 [2024-11-06 08:52:35.081438] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:21.892 [2024-11-06 08:52:35.082398] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:16:21.892 [2024-11-06 08:52:35.082455] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.892 [2024-11-06 08:52:35.148847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.152 [2024-11-06 08:52:35.209418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.152 [2024-11-06 08:52:35.209473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.152 [2024-11-06 08:52:35.209486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.152 [2024-11-06 08:52:35.209497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.152 [2024-11-06 08:52:35.209506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:22.152 [2024-11-06 08:52:35.211009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.152 [2024-11-06 08:52:35.211038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.152 [2024-11-06 08:52:35.211095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.152 [2024-11-06 08:52:35.211098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.152 [2024-11-06 08:52:35.312091] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:22.152 [2024-11-06 08:52:35.312199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:22.152 [2024-11-06 08:52:35.312510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:22.152 [2024-11-06 08:52:35.313151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:22.152 [2024-11-06 08:52:35.313377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:22.152 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.152 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:22.152 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:23.087 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:23.345 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:23.345 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:23.345 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.345 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:23.345 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.913 Malloc1 00:16:23.913 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:24.171 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:24.428 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:24.686 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:24.686 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:24.686 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:24.944 Malloc2 00:16:24.945 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:25.202 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:25.460 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 807842 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 807842 ']' 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 807842 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:25.718 08:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 807842 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 807842' 00:16:25.718 killing process with pid 807842 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 807842 00:16:25.718 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 807842 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:25.976 00:16:25.976 real 0m53.789s 00:16:25.976 user 3m28.311s 00:16:25.976 sys 0m3.957s 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:25.976 ************************************ 00:16:25.976 END TEST nvmf_vfio_user 00:16:25.976 ************************************ 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:25.976 08:52:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.235 ************************************ 00:16:26.235 START TEST nvmf_vfio_user_nvme_compliance 00:16:26.235 ************************************ 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:26.235 * Looking for test storage... 00:16:26.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lcov --version 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.235 08:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:26.235 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.236 08:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.236 --rc genhtml_branch_coverage=1 00:16:26.236 --rc genhtml_function_coverage=1 00:16:26.236 --rc genhtml_legend=1 00:16:26.236 --rc geninfo_all_blocks=1 00:16:26.236 --rc geninfo_unexecuted_blocks=1 00:16:26.236 00:16:26.236 ' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.236 --rc genhtml_branch_coverage=1 00:16:26.236 --rc genhtml_function_coverage=1 00:16:26.236 --rc genhtml_legend=1 00:16:26.236 --rc geninfo_all_blocks=1 00:16:26.236 --rc geninfo_unexecuted_blocks=1 00:16:26.236 00:16:26.236 ' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.236 --rc genhtml_branch_coverage=1 00:16:26.236 --rc genhtml_function_coverage=1 00:16:26.236 --rc 
genhtml_legend=1 00:16:26.236 --rc geninfo_all_blocks=1 00:16:26.236 --rc geninfo_unexecuted_blocks=1 00:16:26.236 00:16:26.236 ' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:26.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.236 --rc genhtml_branch_coverage=1 00:16:26.236 --rc genhtml_function_coverage=1 00:16:26.236 --rc genhtml_legend=1 00:16:26.236 --rc geninfo_all_blocks=1 00:16:26.236 --rc geninfo_unexecuted_blocks=1 00:16:26.236 00:16:26.236 ' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.236 08:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.236 08:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=808443 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 808443' 00:16:26.236 Process pid: 808443 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 808443 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 808443 ']' 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:26.236 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.237 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:26.237 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:26.237 [2024-11-06 08:52:39.509906] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:16:26.237 [2024-11-06 08:52:39.509986] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.501 [2024-11-06 08:52:39.577011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:26.501 [2024-11-06 08:52:39.632856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.501 [2024-11-06 08:52:39.632924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.501 [2024-11-06 08:52:39.632939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.501 [2024-11-06 08:52:39.632950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.501 [2024-11-06 08:52:39.632959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:26.501 [2024-11-06 08:52:39.634395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.501 [2024-11-06 08:52:39.634456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.501 [2024-11-06 08:52:39.634460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.501 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:26.501 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:26.501 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.875 08:52:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 malloc0 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.875 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:27.876 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.876 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.876 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:27.876 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:27.876 00:16:27.876 00:16:27.876 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.876 http://cunit.sourceforge.net/ 00:16:27.876 00:16:27.876 00:16:27.876 Suite: nvme_compliance 00:16:27.876 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-06 08:52:41.015335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.876 [2024-11-06 08:52:41.020274] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:27.876 [2024-11-06 08:52:41.020299] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:27.876 [2024-11-06 08:52:41.020310] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:27.876 [2024-11-06 08:52:41.022375] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.876 passed 00:16:27.876 Test: admin_identify_ctrlr_verify_fused ...[2024-11-06 08:52:41.107927] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.876 [2024-11-06 08:52:41.110952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:27.876 passed 00:16:28.133 Test: admin_identify_ns ...[2024-11-06 08:52:41.197641] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.133 [2024-11-06 08:52:41.256847] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:28.133 [2024-11-06 08:52:41.264849] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:28.133 [2024-11-06 08:52:41.285969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:28.133 passed 00:16:28.133 Test: admin_get_features_mandatory_features ...[2024-11-06 08:52:41.369653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.133 [2024-11-06 08:52:41.372671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.133 passed 00:16:28.391 Test: admin_get_features_optional_features ...[2024-11-06 08:52:41.457254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.391 [2024-11-06 08:52:41.460276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.391 passed 00:16:28.391 Test: admin_set_features_number_of_queues ...[2024-11-06 08:52:41.542358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.391 [2024-11-06 08:52:41.647940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.391 passed 00:16:28.648 Test: admin_get_log_page_mandatory_logs ...[2024-11-06 08:52:41.729654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.648 [2024-11-06 08:52:41.732685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.648 passed 00:16:28.648 Test: admin_get_log_page_with_lpo ...[2024-11-06 08:52:41.814871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.648 [2024-11-06 08:52:41.883852] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:28.648 [2024-11-06 08:52:41.896950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.648 passed 00:16:28.906 Test: fabric_property_get ...[2024-11-06 08:52:41.981058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.906 [2024-11-06 08:52:41.982351] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:28.906 [2024-11-06 08:52:41.984075] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.906 passed 00:16:28.906 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-06 08:52:42.066611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.906 [2024-11-06 08:52:42.067954] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:28.906 [2024-11-06 08:52:42.069628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.906 passed 00:16:28.906 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-06 08:52:42.153853] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.164 [2024-11-06 08:52:42.239854] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.164 [2024-11-06 08:52:42.255845] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.164 [2024-11-06 08:52:42.260937] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.164 passed 00:16:29.164 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-06 08:52:42.343607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.164 [2024-11-06 08:52:42.344956] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:29.164 [2024-11-06 08:52:42.346631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.164 passed 00:16:29.164 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-06 08:52:42.430867] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.422 [2024-11-06 08:52:42.504845] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:29.422 [2024-11-06 
08:52:42.528840] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.422 [2024-11-06 08:52:42.533950] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.422 passed 00:16:29.422 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-06 08:52:42.620151] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.422 [2024-11-06 08:52:42.621470] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:29.422 [2024-11-06 08:52:42.621509] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:29.422 [2024-11-06 08:52:42.623164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.422 passed 00:16:29.422 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-06 08:52:42.706531] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.680 [2024-11-06 08:52:42.797858] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:29.680 [2024-11-06 08:52:42.805842] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:29.680 [2024-11-06 08:52:42.813869] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:29.680 [2024-11-06 08:52:42.821842] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:29.680 [2024-11-06 08:52:42.850968] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.680 passed 00:16:29.680 Test: admin_create_io_sq_verify_pc ...[2024-11-06 08:52:42.935577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.680 [2024-11-06 08:52:42.958854] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:29.938 [2024-11-06 08:52:42.976001] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.938 passed 00:16:29.938 Test: admin_create_io_qp_max_qps ...[2024-11-06 08:52:43.057553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:30.871 [2024-11-06 08:52:44.149852] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:31.437 [2024-11-06 08:52:44.522682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.437 passed 00:16:31.437 Test: admin_create_io_sq_shared_cq ...[2024-11-06 08:52:44.605393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.694 [2024-11-06 08:52:44.736838] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:31.694 [2024-11-06 08:52:44.773924] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.694 passed 00:16:31.694 00:16:31.694 Run Summary: Type Total Ran Passed Failed Inactive 00:16:31.694 suites 1 1 n/a 0 0 00:16:31.694 tests 18 18 18 0 0 00:16:31.694 asserts 360 360 360 0 n/a 00:16:31.694 00:16:31.694 Elapsed time = 1.555 seconds 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 808443 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 808443 ']' 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 808443 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 808443 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 808443' 00:16:31.694 killing process with pid 808443 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 808443 00:16:31.694 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 808443 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:31.952 00:16:31.952 real 0m5.825s 00:16:31.952 user 0m16.344s 00:16:31.952 sys 0m0.562s 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 ************************************ 00:16:31.952 END TEST nvmf_vfio_user_nvme_compliance 00:16:31.952 ************************************ 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.952 ************************************ 00:16:31.952 START TEST nvmf_vfio_user_fuzz 00:16:31.952 ************************************ 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:31.952 * Looking for test storage... 00:16:31.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lcov --version 00:16:31.952 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.212 08:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:16:32.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.212 --rc genhtml_branch_coverage=1 00:16:32.212 --rc genhtml_function_coverage=1 00:16:32.212 --rc genhtml_legend=1 00:16:32.212 --rc geninfo_all_blocks=1 00:16:32.212 --rc geninfo_unexecuted_blocks=1 00:16:32.212 00:16:32.212 ' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:16:32.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.212 --rc genhtml_branch_coverage=1 00:16:32.212 --rc genhtml_function_coverage=1 00:16:32.212 --rc genhtml_legend=1 00:16:32.212 --rc geninfo_all_blocks=1 00:16:32.212 --rc geninfo_unexecuted_blocks=1 00:16:32.212 00:16:32.212 ' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:16:32.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.212 --rc genhtml_branch_coverage=1 00:16:32.212 --rc genhtml_function_coverage=1 00:16:32.212 --rc genhtml_legend=1 00:16:32.212 --rc geninfo_all_blocks=1 00:16:32.212 --rc geninfo_unexecuted_blocks=1 00:16:32.212 00:16:32.212 ' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:16:32.212 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:32.212 --rc genhtml_branch_coverage=1 00:16:32.212 --rc genhtml_function_coverage=1 00:16:32.212 --rc genhtml_legend=1 00:16:32.212 --rc geninfo_all_blocks=1 00:16:32.212 --rc geninfo_unexecuted_blocks=1 00:16:32.212 00:16:32.212 ' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.212 08:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.212 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=809178 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 809178' 00:16:32.213 Process pid: 809178 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 809178 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 809178 ']' 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:32.213 08:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:32.213 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:32.471 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:32.471 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:32.471 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 malloc0 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:33.405 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:05.469 Fuzzing completed. Shutting down the fuzz application 00:17:05.469 00:17:05.469 Dumping successful admin opcodes: 00:17:05.469 8, 9, 10, 24, 00:17:05.469 Dumping successful io opcodes: 00:17:05.469 0, 00:17:05.469 NS: 0x20000081ef00 I/O qp, Total commands completed: 671731, total successful commands: 2617, random_seed: 2907164928 00:17:05.469 NS: 0x20000081ef00 admin qp, Total commands completed: 86682, total successful commands: 692, random_seed: 1415795328 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 809178 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 809178 ']' 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 809178 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 809178 00:17:05.469 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 809178' 00:17:05.469 killing process with pid 809178 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 809178 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 809178 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:05.469 00:17:05.469 real 0m32.238s 00:17:05.469 user 0m33.621s 00:17:05.469 sys 0m25.491s 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.469 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:05.469 ************************************ 00:17:05.469 END TEST nvmf_vfio_user_fuzz 00:17:05.469 ************************************ 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.470 ************************************ 00:17:05.470 START TEST nvmf_auth_target 00:17:05.470 ************************************ 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:05.470 * Looking for test storage... 00:17:05.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lcov --version 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.470 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.470 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:17:05.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.470 --rc genhtml_branch_coverage=1 00:17:05.470 --rc genhtml_function_coverage=1 00:17:05.470 --rc genhtml_legend=1 00:17:05.470 --rc geninfo_all_blocks=1 00:17:05.470 --rc geninfo_unexecuted_blocks=1 00:17:05.470 00:17:05.470 ' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:17:05.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.470 --rc genhtml_branch_coverage=1 00:17:05.470 --rc genhtml_function_coverage=1 00:17:05.470 --rc genhtml_legend=1 00:17:05.470 --rc geninfo_all_blocks=1 00:17:05.470 --rc geninfo_unexecuted_blocks=1 00:17:05.470 00:17:05.470 ' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:17:05.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.470 --rc genhtml_branch_coverage=1 00:17:05.470 --rc genhtml_function_coverage=1 00:17:05.470 --rc genhtml_legend=1 00:17:05.470 --rc geninfo_all_blocks=1 00:17:05.470 --rc geninfo_unexecuted_blocks=1 00:17:05.470 00:17:05.470 ' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:17:05.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.470 --rc genhtml_branch_coverage=1 00:17:05.470 --rc genhtml_function_coverage=1 00:17:05.470 --rc genhtml_legend=1 00:17:05.470 
--rc geninfo_all_blocks=1 00:17:05.470 --rc geninfo_unexecuted_blocks=1 00:17:05.470 00:17:05.470 ' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.470 
08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.470 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:05.471 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:05.471 08:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:05.471 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.848 08:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.848 08:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:06.848 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:06.848 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.848 
08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:06.848 Found net devices under 0000:09:00.0: cvl_0_0 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:06.848 
08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:06.848 Found net devices under 0000:09:00.1: cvl_0_1 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:06.848 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.849 08:53:19 
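The discovery loop traced above maps each PCI address to its kernel net device by globbing sysfs and stripping the path prefix. A minimal sketch of that mapping, using a mock sysfs tree so it runs without real NICs (the real script globs `/sys/bus/pci/devices/$pci/net/*` the same way; the mock directory layout is the only assumption):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device mapping from nvmf/common.sh above.
# A temp dir stands in for /sys/bus/pci/devices so no hardware is needed.
set -euo pipefail

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:09:00.0/net/cvl_0_0" "$sysfs/0000:09:00.1/net/cvl_0_1"

net_devs=()
for pci in "0000:09:00.0" "0000:09:00.1"; do
    pci_net_devs=("$sysfs/$pci/net/"*)        # one glob per PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep iface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

This reproduces the two "Found net devices under 0000:09:00.x" lines in the log, leaving `net_devs=(cvl_0_0 cvl_0_1)`.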
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:17:06.849 00:17:06.849 --- 10.0.0.2 ping statistics --- 00:17:06.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.849 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:06.849 00:17:06.849 --- 10.0.0.1 ping statistics --- 00:17:06.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.849 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
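The `nvmf_tcp_init` sequence above moves one port into a target namespace, leaves the other in the root namespace as the initiator, opens TCP port 4420, and verifies reachability in both directions with ping. A dry-run sketch of that sequence (interface and namespace names are taken from the log; `run` only echoes, since the real commands need root — replace its body with `"$@"` to apply them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology built above: cvl_0_0 moves into
# a namespace for the target (10.0.0.2), cvl_0_1 stays in the root
# namespace for the initiator (10.0.0.1).
set -euo pipefail
run() { echo "+ $*"; }   # stand-in; use run() { "$@"; } with root to execute

ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
run ip netns add "$ns"
run ip link set "$tgt_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$ini_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
run ip link set "$ini_if" up
run ip netns exec "$ns" ip link set "$tgt_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                        # initiator -> target
run ip netns exec "$ns" ping -c 1 10.0.0.1    # target -> initiator
```

The two ping checks correspond to the two ping statistics blocks in the log; once they pass, the target app is launched inside the namespace via `ip netns exec`.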
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=814538 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 814538 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 814538 ']' 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.849 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=814673 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@752 -- # digest=null 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3131aeb3e5493332b2eaa69dbf8007369edeab5df3d1c0bc 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.v0j 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3131aeb3e5493332b2eaa69dbf8007369edeab5df3d1c0bc 0 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3131aeb3e5493332b2eaa69dbf8007369edeab5df3d1c0bc 0 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3131aeb3e5493332b2eaa69dbf8007369edeab5df3d1c0bc 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.v0j 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.v0j 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.v0j 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=b750d94210ab09d674f83dca3ba17fced517b8b1d8a5acdd3d4ec0b12f90c5a5 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.YGe 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key b750d94210ab09d674f83dca3ba17fced517b8b1d8a5acdd3d4ec0b12f90c5a5 3 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 b750d94210ab09d674f83dca3ba17fced517b8b1d8a5acdd3d4ec0b12f90c5a5 3 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=b750d94210ab09d674f83dca3ba17fced517b8b1d8a5acdd3d4ec0b12f90c5a5 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.YGe 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.YGe 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.YGe 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:17:07.107 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5f3a4b89ea06f5630f066f038a8d3b75 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.lfE 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5f3a4b89ea06f5630f066f038a8d3b75 1 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
5f3a4b89ea06f5630f066f038a8d3b75 1 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5f3a4b89ea06f5630f066f038a8d3b75 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:17:07.108 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.lfE 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.lfE 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.lfE 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=600b07bc77bbd67af7261ae210c8b8854a08a9a32b9ab683 00:17:07.367 08:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.SxW 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 600b07bc77bbd67af7261ae210c8b8854a08a9a32b9ab683 2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 600b07bc77bbd67af7261ae210c8b8854a08a9a32b9ab683 2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=600b07bc77bbd67af7261ae210c8b8854a08a9a32b9ab683 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.SxW 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.SxW 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.SxW 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=e5da82f50c3f3490fea6632f1a2e50d02306c1f0b608cdb5 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.LD9 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key e5da82f50c3f3490fea6632f1a2e50d02306c1f0b608cdb5 2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 e5da82f50c3f3490fea6632f1a2e50d02306c1f0b608cdb5 2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=e5da82f50c3f3490fea6632f1a2e50d02306c1f0b608cdb5 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.LD9 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.LD9 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.LD9 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=5ac9f3f421b5807273cbe4914185792b 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.EsH 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 5ac9f3f421b5807273cbe4914185792b 1 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 5ac9f3f421b5807273cbe4914185792b 1 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=5ac9f3f421b5807273cbe4914185792b 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.EsH 00:17:07.367 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.EsH 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EsH 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d606d1e54e6141d6a175edc84613f0d35aa0b93a8a4072e50ae4dc8488c56565 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.XCz 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d606d1e54e6141d6a175edc84613f0d35aa0b93a8a4072e50ae4dc8488c56565 3 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 d606d1e54e6141d6a175edc84613f0d35aa0b93a8a4072e50ae4dc8488c56565 3 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d606d1e54e6141d6a175edc84613f0d35aa0b93a8a4072e50ae4dc8488c56565 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.XCz 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.XCz 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.XCz 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 814538 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 814538 ']' 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
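Each `gen_dhchap_key <digest> <len>` call traced above draws `len/2` random bytes as hex via `xxd`, then wraps them into a `DHHC-1` secret with an inline `python -` step. A sketch of that wrapping, assuming the DHHC-1 layout used by NVMe DH-HMAC-CHAP: `DHHC-1:<2-digit digest id>:<base64(key bytes + little-endian CRC32)>:` (the exact internals of `format_key` are not shown in the log, so treat this layout as an assumption):

```shell
#!/usr/bin/env bash
# Sketch of gen_dhchap_key: random hex key, then DHHC-1 wrapping.
# Assumed secret layout: DHHC-1:<digest id>:<base64(key || crc32-le)>:
set -euo pipefail

gen_key() {            # gen_key <hex-len> -> hex string of that length
    xxd -p -c0 -l $(( $1 / 2 )) /dev/urandom
}

format_dhchap_key() {  # format_dhchap_key <hex-key> <digest-id>
python3 - "$1" "$2" <<'EOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 trailer, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

key=$(gen_key 48)                      # 48 hex chars = 24 key bytes
secret=$(format_dhchap_key "$key" 0)   # digest 0 = null, as for keys[0]
echo "$secret"
```

With digest ids 0–3 mapping to null/sha256/sha384/sha512, this matches the `format_dhchap_key ... 0|1|2|3` calls in the log; the result is written to a `chmod 0600` temp file under `/tmp/spdk.key-*`.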
00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.368 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 814673 /var/tmp/host.sock 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 814673 ']' 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:07.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:07.626 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.884 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.884 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:07.884 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:07.884 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.884 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.v0j 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.v0j 00:17:08.141 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.v0j 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.YGe ]] 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YGe 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YGe 00:17:08.399 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YGe 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lfE 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.lfE 00:17:08.656 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.lfE 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.SxW ]] 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SxW 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SxW 00:17:08.914 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SxW 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LD9 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.LD9 00:17:09.172 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.LD9 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EsH ]] 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EsH 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EsH 00:17:09.431 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EsH 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XCz 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.XCz 00:17:09.689 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.XCz 00:17:09.946 08:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:09.946 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:09.946 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.946 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.946 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.946 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.205 08:53:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.205 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.463 00:17:10.721 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.721 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.721 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.979 { 00:17:10.979 "cntlid": 1, 00:17:10.979 "qid": 0, 00:17:10.979 "state": "enabled", 00:17:10.979 "thread": "nvmf_tgt_poll_group_000", 00:17:10.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.979 "listen_address": { 00:17:10.979 "trtype": "TCP", 00:17:10.979 "adrfam": "IPv4", 00:17:10.979 "traddr": "10.0.0.2", 00:17:10.979 "trsvcid": "4420" 00:17:10.979 }, 00:17:10.979 "peer_address": { 00:17:10.979 "trtype": "TCP", 00:17:10.979 "adrfam": "IPv4", 00:17:10.979 "traddr": "10.0.0.1", 00:17:10.979 "trsvcid": "56050" 00:17:10.979 }, 00:17:10.979 "auth": { 00:17:10.979 "state": "completed", 00:17:10.979 "digest": "sha256", 00:17:10.979 "dhgroup": "null" 00:17:10.979 } 00:17:10.979 } 00:17:10.979 ]' 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.979 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.237 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:11.237 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:12.170 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.428 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.686 00:17:12.686 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.686 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.686 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.970 { 00:17:12.970 "cntlid": 3, 00:17:12.970 "qid": 0, 00:17:12.970 "state": "enabled", 00:17:12.970 "thread": "nvmf_tgt_poll_group_000", 00:17:12.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:12.970 "listen_address": { 00:17:12.970 "trtype": "TCP", 00:17:12.970 "adrfam": "IPv4", 00:17:12.970 
"traddr": "10.0.0.2", 00:17:12.970 "trsvcid": "4420" 00:17:12.970 }, 00:17:12.970 "peer_address": { 00:17:12.970 "trtype": "TCP", 00:17:12.970 "adrfam": "IPv4", 00:17:12.970 "traddr": "10.0.0.1", 00:17:12.970 "trsvcid": "51590" 00:17:12.970 }, 00:17:12.970 "auth": { 00:17:12.970 "state": "completed", 00:17:12.970 "digest": "sha256", 00:17:12.970 "dhgroup": "null" 00:17:12.970 } 00:17:12.970 } 00:17:12.970 ]' 00:17:12.970 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.246 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.503 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:13.503 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.437 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.696 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.953 00:17:14.953 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.953 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.953 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.211 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.211 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.211 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.212 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.212 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.212 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.212 { 00:17:15.212 "cntlid": 5, 00:17:15.212 "qid": 0, 00:17:15.212 "state": "enabled", 00:17:15.212 "thread": "nvmf_tgt_poll_group_000", 00:17:15.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:15.212 "listen_address": { 00:17:15.212 "trtype": "TCP", 00:17:15.212 "adrfam": "IPv4", 00:17:15.212 "traddr": "10.0.0.2", 00:17:15.212 "trsvcid": "4420" 00:17:15.212 }, 00:17:15.212 "peer_address": { 00:17:15.212 "trtype": "TCP", 00:17:15.212 "adrfam": "IPv4", 00:17:15.212 "traddr": "10.0.0.1", 00:17:15.212 "trsvcid": "51612" 00:17:15.212 }, 00:17:15.212 "auth": { 00:17:15.212 "state": "completed", 00:17:15.212 "digest": "sha256", 00:17:15.212 "dhgroup": "null" 00:17:15.212 } 00:17:15.212 } 00:17:15.212 ]' 00:17:15.212 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.212 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.212 08:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.469 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.469 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.470 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.470 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.470 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.728 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:15.728 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.660 
08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.660 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.919 08:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.919 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.177 00:17:17.177 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.177 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.177 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.434 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.434 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.434 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.434 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.434 08:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.434 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.435 { 00:17:17.435 "cntlid": 7, 00:17:17.435 "qid": 0, 00:17:17.435 "state": "enabled", 00:17:17.435 "thread": "nvmf_tgt_poll_group_000", 00:17:17.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:17.435 "listen_address": { 00:17:17.435 "trtype": "TCP", 00:17:17.435 "adrfam": "IPv4", 00:17:17.435 "traddr": "10.0.0.2", 00:17:17.435 "trsvcid": "4420" 00:17:17.435 }, 00:17:17.435 "peer_address": { 00:17:17.435 "trtype": "TCP", 00:17:17.435 "adrfam": "IPv4", 00:17:17.435 "traddr": "10.0.0.1", 00:17:17.435 "trsvcid": "51658" 00:17:17.435 }, 00:17:17.435 "auth": { 00:17:17.435 "state": "completed", 00:17:17.435 "digest": "sha256", 00:17:17.435 "dhgroup": "null" 00:17:17.435 } 00:17:17.435 } 00:17:17.435 ]' 00:17:17.435 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.435 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.435 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.692 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.692 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.692 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.692 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.692 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:17.950 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:17.950 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:18.883 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.141 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.142 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.142 08:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.399 00:17:19.399 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.399 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.399 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.657 { 00:17:19.657 "cntlid": 9, 00:17:19.657 "qid": 0, 00:17:19.657 "state": "enabled", 00:17:19.657 "thread": "nvmf_tgt_poll_group_000", 00:17:19.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:19.657 "listen_address": { 00:17:19.657 "trtype": "TCP", 00:17:19.657 "adrfam": "IPv4", 00:17:19.657 "traddr": "10.0.0.2", 00:17:19.657 "trsvcid": "4420" 00:17:19.657 }, 00:17:19.657 "peer_address": { 
00:17:19.657 "trtype": "TCP", 00:17:19.657 "adrfam": "IPv4", 00:17:19.657 "traddr": "10.0.0.1", 00:17:19.657 "trsvcid": "51674" 00:17:19.657 }, 00:17:19.657 "auth": { 00:17:19.657 "state": "completed", 00:17:19.657 "digest": "sha256", 00:17:19.657 "dhgroup": "ffdhe2048" 00:17:19.657 } 00:17:19.657 } 00:17:19.657 ]' 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.657 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.914 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.914 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.914 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.172 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:20.172 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.108 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.366 08:53:34 
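The repeated `jq -r '.[0].auth.digest'` / `.dhgroup` / `.state` checks (target/auth.sh@75-77) validate the `nvmf_subsystem_get_qpairs` output against the configuration just applied. A minimal sketch of those three checks, assuming `jq` is installed and using a trimmed copy of a qpair record from the log:

```shell
#!/usr/bin/env bash
# Sketch of the qpair auth verification applied after each connect.
# The JSON is a trimmed qpair record as returned by nvmf_subsystem_get_qpairs.
qpairs='[{"cntlid": 9, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}}]'

digest=$(jq -r '.[0].auth.digest'  <<<"$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
state=$(jq -r '.[0].auth.state'   <<<"$qpairs")

# The test only passes when all three fields match what was configured.
[[ $digest == sha256 && $dhgroup == ffdhe2048 && $state == completed ]] && echo "auth check passed"
```

An `auth.state` of `completed` means the DH-HMAC-CHAP exchange finished successfully for that queue pair; a mismatch in any field fails the `[[ ... == \s\h\a\2\5\6 ]]`-style comparisons visible in the log.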
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.366 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.624 00:17:21.624 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.624 08:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.624 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.882 { 00:17:21.882 "cntlid": 11, 00:17:21.882 "qid": 0, 00:17:21.882 "state": "enabled", 00:17:21.882 "thread": "nvmf_tgt_poll_group_000", 00:17:21.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:21.882 "listen_address": { 00:17:21.882 "trtype": "TCP", 00:17:21.882 "adrfam": "IPv4", 00:17:21.882 "traddr": "10.0.0.2", 00:17:21.882 "trsvcid": "4420" 00:17:21.882 }, 00:17:21.882 "peer_address": { 00:17:21.882 "trtype": "TCP", 00:17:21.882 "adrfam": "IPv4", 00:17:21.882 "traddr": "10.0.0.1", 00:17:21.882 "trsvcid": "51706" 00:17:21.882 }, 00:17:21.882 "auth": { 00:17:21.882 "state": "completed", 00:17:21.882 "digest": "sha256", 00:17:21.882 "dhgroup": "ffdhe2048" 00:17:21.882 } 00:17:21.882 } 00:17:21.882 ]' 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.882 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.448 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:22.448 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:23.013 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.014 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.579 08:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.579 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.845 00:17:23.845 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.845 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.845 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.104 { 00:17:24.104 "cntlid": 13, 00:17:24.104 "qid": 0, 00:17:24.104 "state": "enabled", 00:17:24.104 "thread": "nvmf_tgt_poll_group_000", 00:17:24.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.104 "listen_address": { 00:17:24.104 "trtype": "TCP", 00:17:24.104 "adrfam": "IPv4", 00:17:24.104 "traddr": "10.0.0.2", 00:17:24.104 "trsvcid": "4420" 00:17:24.104 }, 00:17:24.104 "peer_address": { 00:17:24.104 "trtype": "TCP", 00:17:24.104 "adrfam": "IPv4", 00:17:24.104 "traddr": "10.0.0.1", 00:17:24.104 "trsvcid": "48568" 00:17:24.104 }, 00:17:24.104 "auth": { 00:17:24.104 "state": "completed", 00:17:24.104 "digest": "sha256", 00:17:24.104 "dhgroup": "ffdhe2048" 00:17:24.104 } 00:17:24.104 } 00:17:24.104 ]' 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:24.104 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.365 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:24.365 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.299 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.557 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.122 00:17:26.122 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.122 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.122 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.380 { 00:17:26.380 "cntlid": 15, 00:17:26.380 "qid": 0, 00:17:26.380 "state": "enabled", 00:17:26.380 "thread": "nvmf_tgt_poll_group_000", 00:17:26.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:26.380 "listen_address": { 00:17:26.380 "trtype": "TCP", 00:17:26.380 "adrfam": "IPv4", 00:17:26.380 "traddr": "10.0.0.2", 00:17:26.380 "trsvcid": 
"4420" 00:17:26.380 }, 00:17:26.380 "peer_address": { 00:17:26.380 "trtype": "TCP", 00:17:26.380 "adrfam": "IPv4", 00:17:26.380 "traddr": "10.0.0.1", 00:17:26.380 "trsvcid": "48594" 00:17:26.380 }, 00:17:26.380 "auth": { 00:17:26.380 "state": "completed", 00:17:26.380 "digest": "sha256", 00:17:26.380 "dhgroup": "ffdhe2048" 00:17:26.380 } 00:17:26.380 } 00:17:26.380 ]' 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.380 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.637 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:26.637 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.570 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.828 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.394 00:17:28.394 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.394 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:28.394 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.651 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.651 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.652 { 00:17:28.652 "cntlid": 17, 00:17:28.652 "qid": 0, 00:17:28.652 "state": "enabled", 00:17:28.652 "thread": "nvmf_tgt_poll_group_000", 00:17:28.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:28.652 "listen_address": { 00:17:28.652 "trtype": "TCP", 00:17:28.652 "adrfam": "IPv4", 00:17:28.652 "traddr": "10.0.0.2", 00:17:28.652 "trsvcid": "4420" 00:17:28.652 }, 00:17:28.652 "peer_address": { 00:17:28.652 "trtype": "TCP", 00:17:28.652 "adrfam": "IPv4", 00:17:28.652 "traddr": "10.0.0.1", 00:17:28.652 "trsvcid": "48610" 00:17:28.652 }, 00:17:28.652 "auth": { 00:17:28.652 "state": "completed", 00:17:28.652 "digest": "sha256", 00:17:28.652 "dhgroup": "ffdhe3072" 00:17:28.652 } 00:17:28.652 } 00:17:28.652 ]' 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.652 08:53:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.652 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.909 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:28.910 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.842 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.100 08:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.100 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.358 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.358 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.358 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.358 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.617 00:17:30.617 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.617 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.617 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.875 { 00:17:30.875 "cntlid": 19, 00:17:30.875 "qid": 0, 00:17:30.875 "state": "enabled", 00:17:30.875 "thread": "nvmf_tgt_poll_group_000", 00:17:30.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:30.875 "listen_address": { 00:17:30.875 "trtype": "TCP", 00:17:30.875 "adrfam": "IPv4", 00:17:30.875 "traddr": "10.0.0.2", 00:17:30.875 "trsvcid": "4420" 00:17:30.875 }, 00:17:30.875 "peer_address": { 00:17:30.875 "trtype": "TCP", 00:17:30.875 "adrfam": "IPv4", 00:17:30.875 "traddr": "10.0.0.1", 00:17:30.875 "trsvcid": "48638" 00:17:30.875 }, 00:17:30.875 "auth": { 00:17:30.875 "state": "completed", 00:17:30.875 "digest": "sha256", 00:17:30.875 "dhgroup": "ffdhe3072" 00:17:30.875 } 00:17:30.875 } 00:17:30.875 ]' 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:30.875 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.133 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:31.133 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:32.067 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.067 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.067 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.067 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.325 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.325 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.325 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.325 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.583 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.841 00:17:32.841 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.841 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.841 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.099 { 00:17:33.099 "cntlid": 21, 00:17:33.099 "qid": 0, 00:17:33.099 "state": "enabled", 00:17:33.099 "thread": "nvmf_tgt_poll_group_000", 00:17:33.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:33.099 "listen_address": { 
00:17:33.099 "trtype": "TCP", 00:17:33.099 "adrfam": "IPv4", 00:17:33.099 "traddr": "10.0.0.2", 00:17:33.099 "trsvcid": "4420" 00:17:33.099 }, 00:17:33.099 "peer_address": { 00:17:33.099 "trtype": "TCP", 00:17:33.099 "adrfam": "IPv4", 00:17:33.099 "traddr": "10.0.0.1", 00:17:33.099 "trsvcid": "55596" 00:17:33.099 }, 00:17:33.099 "auth": { 00:17:33.099 "state": "completed", 00:17:33.099 "digest": "sha256", 00:17:33.099 "dhgroup": "ffdhe3072" 00:17:33.099 } 00:17:33.099 } 00:17:33.099 ]' 00:17:33.099 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.357 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.615 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:33.615 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.548 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.806 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:35.064 00:17:35.322 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.322 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:17:35.322 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.580 { 00:17:35.580 "cntlid": 23, 00:17:35.580 "qid": 0, 00:17:35.580 "state": "enabled", 00:17:35.580 "thread": "nvmf_tgt_poll_group_000", 00:17:35.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:35.580 "listen_address": { 00:17:35.580 "trtype": "TCP", 00:17:35.580 "adrfam": "IPv4", 00:17:35.580 "traddr": "10.0.0.2", 00:17:35.580 "trsvcid": "4420" 00:17:35.580 }, 00:17:35.580 "peer_address": { 00:17:35.580 "trtype": "TCP", 00:17:35.580 "adrfam": "IPv4", 00:17:35.580 "traddr": "10.0.0.1", 00:17:35.580 "trsvcid": "55614" 00:17:35.580 }, 00:17:35.580 "auth": { 00:17:35.580 "state": "completed", 00:17:35.580 "digest": "sha256", 00:17:35.580 "dhgroup": "ffdhe3072" 00:17:35.580 } 00:17:35.580 } 00:17:35.580 ]' 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.580 08:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.580 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.838 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:35.838 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.772 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.030 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.595 00:17:37.595 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.595 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.595 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.853 08:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.853 { 00:17:37.853 "cntlid": 25, 00:17:37.853 "qid": 0, 00:17:37.853 "state": "enabled", 00:17:37.853 "thread": "nvmf_tgt_poll_group_000", 00:17:37.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:37.853 "listen_address": { 00:17:37.853 "trtype": "TCP", 00:17:37.853 "adrfam": "IPv4", 00:17:37.853 "traddr": "10.0.0.2", 00:17:37.853 "trsvcid": "4420" 00:17:37.853 }, 00:17:37.853 "peer_address": { 00:17:37.853 "trtype": "TCP", 00:17:37.853 "adrfam": "IPv4", 00:17:37.853 "traddr": "10.0.0.1", 00:17:37.853 "trsvcid": "55632" 00:17:37.853 }, 00:17:37.853 "auth": { 00:17:37.853 "state": "completed", 00:17:37.853 "digest": "sha256", 00:17:37.853 "dhgroup": "ffdhe4096" 00:17:37.853 } 00:17:37.853 } 00:17:37.853 ]' 00:17:37.853 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.853 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.853 08:53:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.111 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:38.111 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.045 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.303 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.869 00:17:39.869 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.869 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.869 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.126 { 00:17:40.126 "cntlid": 27, 00:17:40.126 "qid": 0, 00:17:40.126 "state": "enabled", 00:17:40.126 "thread": "nvmf_tgt_poll_group_000", 00:17:40.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:40.126 
"listen_address": { 00:17:40.126 "trtype": "TCP", 00:17:40.126 "adrfam": "IPv4", 00:17:40.126 "traddr": "10.0.0.2", 00:17:40.126 "trsvcid": "4420" 00:17:40.126 }, 00:17:40.126 "peer_address": { 00:17:40.126 "trtype": "TCP", 00:17:40.126 "adrfam": "IPv4", 00:17:40.126 "traddr": "10.0.0.1", 00:17:40.126 "trsvcid": "55648" 00:17:40.126 }, 00:17:40.126 "auth": { 00:17:40.126 "state": "completed", 00:17:40.126 "digest": "sha256", 00:17:40.126 "dhgroup": "ffdhe4096" 00:17:40.126 } 00:17:40.126 } 00:17:40.126 ]' 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.126 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.384 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:40.384 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.318 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.884 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.144 00:17:42.144 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:42.144 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.144 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.434 { 00:17:42.434 "cntlid": 29, 00:17:42.434 "qid": 0, 00:17:42.434 "state": "enabled", 00:17:42.434 "thread": "nvmf_tgt_poll_group_000", 00:17:42.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:42.434 "listen_address": { 00:17:42.434 "trtype": "TCP", 00:17:42.434 "adrfam": "IPv4", 00:17:42.434 "traddr": "10.0.0.2", 00:17:42.434 "trsvcid": "4420" 00:17:42.434 }, 00:17:42.434 "peer_address": { 00:17:42.434 "trtype": "TCP", 00:17:42.434 "adrfam": "IPv4", 00:17:42.434 "traddr": "10.0.0.1", 00:17:42.434 "trsvcid": "55678" 00:17:42.434 }, 00:17:42.434 "auth": { 00:17:42.434 "state": "completed", 00:17:42.434 "digest": "sha256", 00:17:42.434 "dhgroup": "ffdhe4096" 00:17:42.434 } 00:17:42.434 } 00:17:42.434 ]' 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.434 08:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.434 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.719 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.719 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.719 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.719 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:42.719 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.653 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:43.912 08:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.912 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.478 00:17:44.478 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.478 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.478 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.736 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.736 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.736 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.736 08:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.736 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.737 { 00:17:44.737 "cntlid": 31, 00:17:44.737 "qid": 0, 00:17:44.737 "state": "enabled", 00:17:44.737 "thread": "nvmf_tgt_poll_group_000", 00:17:44.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:44.737 "listen_address": { 00:17:44.737 "trtype": "TCP", 00:17:44.737 "adrfam": "IPv4", 00:17:44.737 "traddr": "10.0.0.2", 00:17:44.737 "trsvcid": "4420" 00:17:44.737 }, 00:17:44.737 "peer_address": { 00:17:44.737 "trtype": "TCP", 00:17:44.737 "adrfam": "IPv4", 00:17:44.737 "traddr": "10.0.0.1", 00:17:44.737 "trsvcid": "55492" 00:17:44.737 }, 00:17:44.737 "auth": { 00:17:44.737 "state": "completed", 00:17:44.737 "digest": "sha256", 00:17:44.737 "dhgroup": "ffdhe4096" 00:17:44.737 } 00:17:44.737 } 00:17:44.737 ]' 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.737 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.737 08:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.995 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:44.995 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:17:45.934 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.499 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.065 00:17:47.065 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.065 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.065 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.322 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.323 { 00:17:47.323 "cntlid": 33, 00:17:47.323 "qid": 0, 00:17:47.323 "state": "enabled", 00:17:47.323 "thread": "nvmf_tgt_poll_group_000", 00:17:47.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:47.323 "listen_address": { 
00:17:47.323 "trtype": "TCP", 00:17:47.323 "adrfam": "IPv4", 00:17:47.323 "traddr": "10.0.0.2", 00:17:47.323 "trsvcid": "4420" 00:17:47.323 }, 00:17:47.323 "peer_address": { 00:17:47.323 "trtype": "TCP", 00:17:47.323 "adrfam": "IPv4", 00:17:47.323 "traddr": "10.0.0.1", 00:17:47.323 "trsvcid": "55510" 00:17:47.323 }, 00:17:47.323 "auth": { 00:17:47.323 "state": "completed", 00:17:47.323 "digest": "sha256", 00:17:47.323 "dhgroup": "ffdhe6144" 00:17:47.323 } 00:17:47.323 } 00:17:47.323 ]' 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.323 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.580 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:47.580 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.511 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.768 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.334 00:17:49.334 08:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.334 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.334 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.592 { 00:17:49.592 "cntlid": 35, 00:17:49.592 "qid": 0, 00:17:49.592 "state": "enabled", 00:17:49.592 "thread": "nvmf_tgt_poll_group_000", 00:17:49.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:49.592 "listen_address": { 00:17:49.592 "trtype": "TCP", 00:17:49.592 "adrfam": "IPv4", 00:17:49.592 "traddr": "10.0.0.2", 00:17:49.592 "trsvcid": "4420" 00:17:49.592 }, 00:17:49.592 "peer_address": { 00:17:49.592 "trtype": "TCP", 00:17:49.592 "adrfam": "IPv4", 00:17:49.592 "traddr": "10.0.0.1", 00:17:49.592 "trsvcid": "55538" 00:17:49.592 }, 00:17:49.592 "auth": { 00:17:49.592 "state": "completed", 00:17:49.592 "digest": "sha256", 00:17:49.592 "dhgroup": "ffdhe6144" 00:17:49.592 } 00:17:49.592 } 00:17:49.592 ]' 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.592 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.851 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.851 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.851 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.851 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.851 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.109 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:50.109 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.042 08:54:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.042 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.300 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.866 00:17:51.866 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.866 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.866 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.124 { 00:17:52.124 "cntlid": 37, 00:17:52.124 "qid": 0, 00:17:52.124 "state": "enabled", 00:17:52.124 "thread": "nvmf_tgt_poll_group_000", 00:17:52.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:52.124 "listen_address": { 00:17:52.124 "trtype": "TCP", 00:17:52.124 "adrfam": "IPv4", 00:17:52.124 "traddr": "10.0.0.2", 00:17:52.124 "trsvcid": "4420" 00:17:52.124 }, 00:17:52.124 "peer_address": { 00:17:52.124 "trtype": "TCP", 00:17:52.124 "adrfam": "IPv4", 00:17:52.124 "traddr": "10.0.0.1", 00:17:52.124 "trsvcid": "55570" 00:17:52.124 }, 00:17:52.124 "auth": { 00:17:52.124 "state": "completed", 00:17:52.124 "digest": "sha256", 00:17:52.124 "dhgroup": "ffdhe6144" 00:17:52.124 } 00:17:52.124 } 00:17:52.124 ]' 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.124 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.125 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.690 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:52.690 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:17:53.257 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.257 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.258 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.515 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.515 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.515 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:53.515 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.515 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.773 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:54.339 00:17:54.339 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.339 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.339 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.597 { 00:17:54.597 "cntlid": 39, 00:17:54.597 "qid": 0, 00:17:54.597 "state": "enabled", 00:17:54.597 "thread": "nvmf_tgt_poll_group_000", 00:17:54.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:54.597 "listen_address": { 00:17:54.597 "trtype": 
"TCP", 00:17:54.597 "adrfam": "IPv4", 00:17:54.597 "traddr": "10.0.0.2", 00:17:54.597 "trsvcid": "4420" 00:17:54.597 }, 00:17:54.597 "peer_address": { 00:17:54.597 "trtype": "TCP", 00:17:54.597 "adrfam": "IPv4", 00:17:54.597 "traddr": "10.0.0.1", 00:17:54.597 "trsvcid": "35116" 00:17:54.597 }, 00:17:54.597 "auth": { 00:17:54.597 "state": "completed", 00:17:54.597 "digest": "sha256", 00:17:54.597 "dhgroup": "ffdhe6144" 00:17:54.597 } 00:17:54.597 } 00:17:54.597 ]' 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.597 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.855 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:54.855 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:17:55.788 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.788 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:55.788 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.788 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.788 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.788 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.788 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.788 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.788 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.046 08:54:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.046 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.978 00:17:56.978 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.978 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.978 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.236 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.236 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.236 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.236 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.237 { 00:17:57.237 "cntlid": 41, 00:17:57.237 "qid": 0, 00:17:57.237 "state": "enabled", 00:17:57.237 "thread": "nvmf_tgt_poll_group_000", 00:17:57.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:57.237 "listen_address": { 00:17:57.237 "trtype": "TCP", 00:17:57.237 "adrfam": "IPv4", 00:17:57.237 "traddr": "10.0.0.2", 00:17:57.237 "trsvcid": "4420" 00:17:57.237 }, 00:17:57.237 "peer_address": { 00:17:57.237 "trtype": "TCP", 00:17:57.237 "adrfam": "IPv4", 00:17:57.237 "traddr": "10.0.0.1", 00:17:57.237 "trsvcid": "35130" 00:17:57.237 }, 00:17:57.237 "auth": { 00:17:57.237 "state": "completed", 00:17:57.237 "digest": "sha256", 00:17:57.237 "dhgroup": "ffdhe8192" 00:17:57.237 } 00:17:57.237 } 00:17:57.237 ]' 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.237 08:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:57.237 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.494 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.494 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.494 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.752 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:57.752 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.686 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.944 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.879 00:17:59.879 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.879 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.879 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.139 { 00:18:00.139 "cntlid": 43, 00:18:00.139 "qid": 0, 00:18:00.139 "state": "enabled", 00:18:00.139 "thread": "nvmf_tgt_poll_group_000", 00:18:00.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:00.139 "listen_address": { 00:18:00.139 "trtype": "TCP", 00:18:00.139 "adrfam": "IPv4", 00:18:00.139 "traddr": "10.0.0.2", 00:18:00.139 "trsvcid": "4420" 00:18:00.139 }, 00:18:00.139 "peer_address": { 00:18:00.139 "trtype": "TCP", 00:18:00.139 "adrfam": "IPv4", 00:18:00.139 "traddr": "10.0.0.1", 00:18:00.139 "trsvcid": "35136" 00:18:00.139 }, 00:18:00.139 "auth": { 00:18:00.139 "state": "completed", 00:18:00.139 "digest": "sha256", 00:18:00.139 "dhgroup": "ffdhe8192" 00:18:00.139 } 00:18:00.139 } 00:18:00.139 ]' 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.139 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.397 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:00.397 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.331 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.589 08:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.523 00:18:02.523 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.523 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.523 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.782 { 00:18:02.782 "cntlid": 45, 00:18:02.782 "qid": 0, 00:18:02.782 "state": "enabled", 00:18:02.782 "thread": "nvmf_tgt_poll_group_000", 00:18:02.782 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:02.782 "listen_address": { 00:18:02.782 "trtype": "TCP", 00:18:02.782 "adrfam": "IPv4", 00:18:02.782 "traddr": "10.0.0.2", 00:18:02.782 "trsvcid": "4420" 00:18:02.782 }, 00:18:02.782 "peer_address": { 00:18:02.782 "trtype": "TCP", 00:18:02.782 "adrfam": "IPv4", 00:18:02.782 "traddr": "10.0.0.1", 00:18:02.782 "trsvcid": "35170" 00:18:02.782 }, 00:18:02.782 "auth": { 00:18:02.782 "state": "completed", 00:18:02.782 "digest": "sha256", 00:18:02.782 "dhgroup": "ffdhe8192" 00:18:02.782 } 00:18:02.782 } 00:18:02.782 ]' 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.782 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.782 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.782 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.782 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.040 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:03.040 08:54:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.974 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.232 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.166 00:18:05.166 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:05.166 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.166 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.424 { 00:18:05.424 "cntlid": 47, 00:18:05.424 "qid": 0, 00:18:05.424 "state": "enabled", 00:18:05.424 "thread": "nvmf_tgt_poll_group_000", 00:18:05.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:05.424 "listen_address": { 00:18:05.424 "trtype": "TCP", 00:18:05.424 "adrfam": "IPv4", 00:18:05.424 "traddr": "10.0.0.2", 00:18:05.424 "trsvcid": "4420" 00:18:05.424 }, 00:18:05.424 "peer_address": { 00:18:05.424 "trtype": "TCP", 00:18:05.424 "adrfam": "IPv4", 00:18:05.424 "traddr": "10.0.0.1", 00:18:05.424 "trsvcid": "45362" 00:18:05.424 }, 00:18:05.424 "auth": { 00:18:05.424 "state": "completed", 00:18:05.424 "digest": "sha256", 00:18:05.424 "dhgroup": "ffdhe8192" 00:18:05.424 } 00:18:05.424 } 00:18:05.424 ]' 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.424 08:54:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.424 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.990 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:05.990 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:06.560 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.824 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.083 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.341 00:18:07.341 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.341 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.341 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.599 08:54:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.599 { 00:18:07.599 "cntlid": 49, 00:18:07.599 "qid": 0, 00:18:07.599 "state": "enabled", 00:18:07.599 "thread": "nvmf_tgt_poll_group_000", 00:18:07.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:07.599 "listen_address": { 00:18:07.599 "trtype": "TCP", 00:18:07.599 "adrfam": "IPv4", 00:18:07.599 "traddr": "10.0.0.2", 00:18:07.599 "trsvcid": "4420" 00:18:07.599 }, 00:18:07.599 "peer_address": { 00:18:07.599 "trtype": "TCP", 00:18:07.599 "adrfam": "IPv4", 00:18:07.599 "traddr": "10.0.0.1", 00:18:07.599 "trsvcid": "45376" 00:18:07.599 }, 00:18:07.599 "auth": { 00:18:07.599 "state": "completed", 00:18:07.599 "digest": "sha384", 00:18:07.599 "dhgroup": "null" 00:18:07.599 } 00:18:07.599 } 00:18:07.599 ]' 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.599 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.857 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:07.857 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.857 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.857 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.857 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.115 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:08.115 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.048 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.307 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.564 00:18:09.564 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.564 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.564 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.130 { 00:18:10.130 "cntlid": 51, 
00:18:10.130 "qid": 0, 00:18:10.130 "state": "enabled", 00:18:10.130 "thread": "nvmf_tgt_poll_group_000", 00:18:10.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:10.130 "listen_address": { 00:18:10.130 "trtype": "TCP", 00:18:10.130 "adrfam": "IPv4", 00:18:10.130 "traddr": "10.0.0.2", 00:18:10.130 "trsvcid": "4420" 00:18:10.130 }, 00:18:10.130 "peer_address": { 00:18:10.130 "trtype": "TCP", 00:18:10.130 "adrfam": "IPv4", 00:18:10.130 "traddr": "10.0.0.1", 00:18:10.130 "trsvcid": "45414" 00:18:10.130 }, 00:18:10.130 "auth": { 00:18:10.130 "state": "completed", 00:18:10.130 "digest": "sha384", 00:18:10.130 "dhgroup": "null" 00:18:10.130 } 00:18:10.130 } 00:18:10.130 ]' 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.130 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.389 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret 
DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:10.389 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.322 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.580 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.146 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.146 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.444 { 00:18:12.444 "cntlid": 53, 00:18:12.444 "qid": 0, 00:18:12.444 "state": "enabled", 00:18:12.444 "thread": "nvmf_tgt_poll_group_000", 00:18:12.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:12.444 "listen_address": { 00:18:12.444 "trtype": "TCP", 00:18:12.444 "adrfam": "IPv4", 00:18:12.444 "traddr": "10.0.0.2", 00:18:12.444 "trsvcid": "4420" 00:18:12.444 }, 00:18:12.444 "peer_address": { 00:18:12.444 "trtype": "TCP", 00:18:12.444 "adrfam": "IPv4", 00:18:12.444 "traddr": "10.0.0.1", 00:18:12.444 "trsvcid": "45438" 00:18:12.444 }, 00:18:12.444 "auth": { 00:18:12.444 "state": "completed", 00:18:12.444 "digest": "sha384", 00:18:12.444 "dhgroup": "null" 00:18:12.444 } 00:18:12.444 } 
00:18:12.444 ]' 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.444 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.774 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:12.774 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.707 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.707 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.964 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:14.223 00:18:14.223 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.223 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.223 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.481 { 00:18:14.481 "cntlid": 55, 00:18:14.481 "qid": 0, 00:18:14.481 "state": "enabled", 00:18:14.481 "thread": "nvmf_tgt_poll_group_000", 00:18:14.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:14.481 "listen_address": { 00:18:14.481 "trtype": "TCP", 00:18:14.481 "adrfam": "IPv4", 00:18:14.481 "traddr": "10.0.0.2", 00:18:14.481 "trsvcid": "4420" 00:18:14.481 }, 00:18:14.481 "peer_address": { 00:18:14.481 "trtype": "TCP", 00:18:14.481 "adrfam": "IPv4", 00:18:14.481 "traddr": "10.0.0.1", 00:18:14.481 "trsvcid": "41276" 00:18:14.481 }, 00:18:14.481 "auth": { 00:18:14.481 "state": "completed", 00:18:14.481 "digest": "sha384", 00:18:14.481 "dhgroup": "null" 00:18:14.481 } 00:18:14.481 } 00:18:14.481 ]' 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.481 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.739 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:14.739 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.739 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.739 08:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.739 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.997 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:14.997 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.930 08:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.930 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.188 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.446 00:18:16.446 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.446 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.446 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.704 { 00:18:16.704 "cntlid": 57, 00:18:16.704 "qid": 0, 00:18:16.704 "state": "enabled", 00:18:16.704 "thread": "nvmf_tgt_poll_group_000", 00:18:16.704 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:16.704 "listen_address": { 00:18:16.704 "trtype": "TCP", 00:18:16.704 "adrfam": "IPv4", 00:18:16.704 "traddr": "10.0.0.2", 00:18:16.704 "trsvcid": "4420" 00:18:16.704 }, 00:18:16.704 "peer_address": { 00:18:16.704 "trtype": "TCP", 00:18:16.704 "adrfam": "IPv4", 00:18:16.704 "traddr": "10.0.0.1", 00:18:16.704 "trsvcid": "41304" 00:18:16.704 }, 00:18:16.704 "auth": { 00:18:16.704 "state": "completed", 00:18:16.704 "digest": "sha384", 00:18:16.704 "dhgroup": "ffdhe2048" 00:18:16.704 } 00:18:16.704 } 00:18:16.704 ]' 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.704 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.270 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:17.270 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:18.202 08:54:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.202 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.768 00:18:18.768 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.768 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.768 08:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.025 { 00:18:19.025 "cntlid": 59, 00:18:19.025 "qid": 0, 00:18:19.025 "state": "enabled", 00:18:19.025 "thread": "nvmf_tgt_poll_group_000", 00:18:19.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:19.025 "listen_address": { 00:18:19.025 "trtype": "TCP", 00:18:19.025 "adrfam": "IPv4", 00:18:19.025 "traddr": "10.0.0.2", 00:18:19.025 "trsvcid": "4420" 00:18:19.025 }, 00:18:19.025 "peer_address": { 00:18:19.025 "trtype": "TCP", 00:18:19.025 "adrfam": "IPv4", 00:18:19.025 "traddr": "10.0.0.1", 00:18:19.025 "trsvcid": "41318" 00:18:19.025 }, 00:18:19.025 "auth": { 00:18:19.025 "state": 
"completed", 00:18:19.025 "digest": "sha384", 00:18:19.025 "dhgroup": "ffdhe2048" 00:18:19.025 } 00:18:19.025 } 00:18:19.025 ]' 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.025 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.026 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.283 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:19.283 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:20.216 08:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.216 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.781 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.040 00:18:21.040 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.040 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.040 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.297 
08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.297 { 00:18:21.297 "cntlid": 61, 00:18:21.297 "qid": 0, 00:18:21.297 "state": "enabled", 00:18:21.297 "thread": "nvmf_tgt_poll_group_000", 00:18:21.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:21.297 "listen_address": { 00:18:21.297 "trtype": "TCP", 00:18:21.297 "adrfam": "IPv4", 00:18:21.297 "traddr": "10.0.0.2", 00:18:21.297 "trsvcid": "4420" 00:18:21.297 }, 00:18:21.297 "peer_address": { 00:18:21.297 "trtype": "TCP", 00:18:21.297 "adrfam": "IPv4", 00:18:21.297 "traddr": "10.0.0.1", 00:18:21.297 "trsvcid": "41344" 00:18:21.297 }, 00:18:21.297 "auth": { 00:18:21.297 "state": "completed", 00:18:21.297 "digest": "sha384", 00:18:21.297 "dhgroup": "ffdhe2048" 00:18:21.297 } 00:18:21.297 } 00:18:21.297 ]' 00:18:21.297 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:21.298 08:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.298 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.862 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:21.862 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.795 
08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:22.795 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.052 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.052 08:54:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:23.053 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.053 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:23.310 00:18:23.310 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.310 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.310 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.568 { 00:18:23.568 "cntlid": 63, 00:18:23.568 
"qid": 0, 00:18:23.568 "state": "enabled", 00:18:23.568 "thread": "nvmf_tgt_poll_group_000", 00:18:23.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:23.568 "listen_address": { 00:18:23.568 "trtype": "TCP", 00:18:23.568 "adrfam": "IPv4", 00:18:23.568 "traddr": "10.0.0.2", 00:18:23.568 "trsvcid": "4420" 00:18:23.568 }, 00:18:23.568 "peer_address": { 00:18:23.568 "trtype": "TCP", 00:18:23.568 "adrfam": "IPv4", 00:18:23.568 "traddr": "10.0.0.1", 00:18:23.568 "trsvcid": "53866" 00:18:23.568 }, 00:18:23.568 "auth": { 00:18:23.568 "state": "completed", 00:18:23.568 "digest": "sha384", 00:18:23.568 "dhgroup": "ffdhe2048" 00:18:23.568 } 00:18:23.568 } 00:18:23.568 ]' 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:23.568 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.826 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.826 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.826 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.083 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:24.083 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.016 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:25.274 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.274 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.531 00:18:25.531 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.531 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.531 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.789 { 00:18:25.789 "cntlid": 65, 00:18:25.789 "qid": 0, 00:18:25.789 "state": "enabled", 00:18:25.789 "thread": "nvmf_tgt_poll_group_000", 00:18:25.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:25.789 "listen_address": { 00:18:25.789 "trtype": "TCP", 00:18:25.789 "adrfam": "IPv4", 00:18:25.789 "traddr": "10.0.0.2", 00:18:25.789 "trsvcid": "4420" 00:18:25.789 }, 00:18:25.789 "peer_address": { 00:18:25.789 "trtype": "TCP", 00:18:25.789 "adrfam": "IPv4", 00:18:25.789 "traddr": "10.0.0.1", 00:18:25.789 "trsvcid": "53884" 00:18:25.789 }, 00:18:25.789 "auth": { 00:18:25.789 "state": 
"completed", 00:18:25.789 "digest": "sha384", 00:18:25.789 "dhgroup": "ffdhe3072" 00:18:25.789 } 00:18:25.789 } 00:18:25.789 ]' 00:18:25.789 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.789 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.789 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.789 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.789 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.047 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.047 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.047 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.305 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:26.305 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.242 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.243 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.500 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.757 00:18:27.757 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.757 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.757 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.015 { 00:18:28.015 "cntlid": 67, 00:18:28.015 "qid": 0, 00:18:28.015 "state": "enabled", 00:18:28.015 "thread": "nvmf_tgt_poll_group_000", 00:18:28.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:28.015 "listen_address": { 00:18:28.015 "trtype": "TCP", 00:18:28.015 "adrfam": "IPv4", 00:18:28.015 "traddr": "10.0.0.2", 00:18:28.015 "trsvcid": "4420" 00:18:28.015 }, 00:18:28.015 "peer_address": { 00:18:28.015 "trtype": "TCP", 00:18:28.015 "adrfam": "IPv4", 00:18:28.015 "traddr": "10.0.0.1", 00:18:28.015 "trsvcid": "53912" 00:18:28.015 }, 00:18:28.015 "auth": { 00:18:28.015 "state": "completed", 00:18:28.015 "digest": "sha384", 00:18:28.015 "dhgroup": "ffdhe3072" 00:18:28.015 } 00:18:28.015 } 00:18:28.015 ]' 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.015 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.276 08:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:28.276 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.276 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.277 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.277 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.535 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:28.535 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.467 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.725 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.983 00:18:29.983 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.983 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.983 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.241 08:54:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.241 { 00:18:30.241 "cntlid": 69, 00:18:30.241 "qid": 0, 00:18:30.241 "state": "enabled", 00:18:30.241 "thread": "nvmf_tgt_poll_group_000", 00:18:30.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:30.241 "listen_address": { 00:18:30.241 "trtype": "TCP", 00:18:30.241 "adrfam": "IPv4", 00:18:30.241 "traddr": "10.0.0.2", 00:18:30.241 "trsvcid": "4420" 00:18:30.241 }, 00:18:30.241 "peer_address": { 00:18:30.241 "trtype": "TCP", 00:18:30.241 "adrfam": "IPv4", 00:18:30.241 "traddr": "10.0.0.1", 00:18:30.241 "trsvcid": "53930" 00:18:30.241 }, 00:18:30.241 "auth": { 00:18:30.241 "state": "completed", 00:18:30.241 "digest": "sha384", 00:18:30.241 "dhgroup": "ffdhe3072" 00:18:30.241 } 00:18:30.241 } 00:18:30.241 ]' 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.241 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.499 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:30.499 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.499 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.499 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.499 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.757 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:30.757 08:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.690 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.691 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.691 08:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.948 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:32.206 00:18:32.206 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.206 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.206 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.463 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.463 { 00:18:32.463 "cntlid": 71, 00:18:32.463 "qid": 0, 00:18:32.463 "state": "enabled", 00:18:32.463 "thread": "nvmf_tgt_poll_group_000", 00:18:32.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:32.463 "listen_address": { 00:18:32.463 "trtype": "TCP", 00:18:32.463 "adrfam": "IPv4", 00:18:32.463 "traddr": "10.0.0.2", 00:18:32.463 "trsvcid": "4420" 00:18:32.464 }, 00:18:32.464 "peer_address": { 00:18:32.464 "trtype": "TCP", 00:18:32.464 "adrfam": "IPv4", 00:18:32.464 "traddr": "10.0.0.1", 
00:18:32.464 "trsvcid": "53952" 00:18:32.464 }, 00:18:32.464 "auth": { 00:18:32.464 "state": "completed", 00:18:32.464 "digest": "sha384", 00:18:32.464 "dhgroup": "ffdhe3072" 00:18:32.464 } 00:18:32.464 } 00:18:32.464 ]' 00:18:32.464 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.464 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.464 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.464 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.464 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.721 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.721 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.721 08:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.979 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:32.979 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.911 08:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:34.169 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:34.170 08:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.170 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:34.427 00:18:34.427 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.427 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.427 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.685 { 00:18:34.685 "cntlid": 73, 00:18:34.685 "qid": 0, 00:18:34.685 "state": "enabled", 00:18:34.685 "thread": "nvmf_tgt_poll_group_000", 00:18:34.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:34.685 "listen_address": { 00:18:34.685 "trtype": "TCP", 00:18:34.685 "adrfam": "IPv4", 00:18:34.685 "traddr": "10.0.0.2", 00:18:34.685 "trsvcid": "4420" 00:18:34.685 }, 00:18:34.685 "peer_address": { 00:18:34.685 "trtype": "TCP", 00:18:34.685 "adrfam": "IPv4", 00:18:34.685 "traddr": "10.0.0.1", 00:18:34.685 "trsvcid": "60166" 00:18:34.685 }, 00:18:34.685 "auth": { 00:18:34.685 "state": "completed", 00:18:34.685 "digest": "sha384", 00:18:34.685 "dhgroup": "ffdhe4096" 00:18:34.685 } 00:18:34.685 } 00:18:34.685 ]' 00:18:34.685 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.944 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.944 08:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.944 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:34.944 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.944 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.944 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.944 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.202 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:35.202 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:36.137 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.137 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:36.137 08:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.137 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.138 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.138 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.138 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.138 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.395 08:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.395 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.961 00:18:36.961 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.961 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.961 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.219 { 00:18:37.219 "cntlid": 75, 00:18:37.219 "qid": 0, 00:18:37.219 "state": "enabled", 00:18:37.219 "thread": "nvmf_tgt_poll_group_000", 00:18:37.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:37.219 "listen_address": { 00:18:37.219 "trtype": "TCP", 00:18:37.219 "adrfam": "IPv4", 00:18:37.219 "traddr": "10.0.0.2", 00:18:37.219 "trsvcid": "4420" 00:18:37.219 }, 00:18:37.219 "peer_address": { 00:18:37.219 "trtype": "TCP", 00:18:37.219 "adrfam": "IPv4", 00:18:37.219 "traddr": "10.0.0.1", 00:18:37.219 "trsvcid": "60196" 00:18:37.219 }, 00:18:37.219 "auth": { 00:18:37.219 "state": "completed", 00:18:37.219 "digest": "sha384", 00:18:37.219 "dhgroup": "ffdhe4096" 00:18:37.219 } 00:18:37.219 } 00:18:37.219 ]' 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.219 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.477 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:37.477 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.410 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.410 08:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.668 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.233 00:18:39.233 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.233 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.233 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.491 { 00:18:39.491 "cntlid": 77, 00:18:39.491 "qid": 0, 00:18:39.491 "state": "enabled", 00:18:39.491 "thread": "nvmf_tgt_poll_group_000", 00:18:39.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:39.491 "listen_address": { 00:18:39.491 "trtype": "TCP", 00:18:39.491 "adrfam": "IPv4", 00:18:39.491 "traddr": "10.0.0.2", 00:18:39.491 
"trsvcid": "4420" 00:18:39.491 }, 00:18:39.491 "peer_address": { 00:18:39.491 "trtype": "TCP", 00:18:39.491 "adrfam": "IPv4", 00:18:39.491 "traddr": "10.0.0.1", 00:18:39.491 "trsvcid": "60214" 00:18:39.491 }, 00:18:39.491 "auth": { 00:18:39.491 "state": "completed", 00:18:39.491 "digest": "sha384", 00:18:39.491 "dhgroup": "ffdhe4096" 00:18:39.491 } 00:18:39.491 } 00:18:39.491 ]' 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.491 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.750 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:39.750 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.684 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:40.941 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.942 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.507 00:18:41.507 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.507 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.507 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.765 { 00:18:41.765 "cntlid": 79, 00:18:41.765 "qid": 0, 00:18:41.765 "state": "enabled", 00:18:41.765 "thread": "nvmf_tgt_poll_group_000", 00:18:41.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:41.765 "listen_address": { 00:18:41.765 "trtype": "TCP", 00:18:41.765 "adrfam": "IPv4", 00:18:41.765 "traddr": "10.0.0.2", 00:18:41.765 "trsvcid": "4420" 00:18:41.765 }, 00:18:41.765 "peer_address": { 00:18:41.765 "trtype": "TCP", 00:18:41.765 "adrfam": "IPv4", 00:18:41.765 "traddr": "10.0.0.1", 00:18:41.765 "trsvcid": "60238" 00:18:41.765 }, 00:18:41.765 "auth": { 00:18:41.765 "state": "completed", 00:18:41.765 "digest": "sha384", 00:18:41.765 "dhgroup": "ffdhe4096" 00:18:41.765 } 00:18:41.765 } 00:18:41.765 ]' 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.765 08:54:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.765 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.766 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.766 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.766 08:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.054 08:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:42.054 08:54:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:43.013 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.014 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.272 08:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.837 00:18:43.837 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.837 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.837 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.095 08:54:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.095 { 00:18:44.095 "cntlid": 81, 00:18:44.095 "qid": 0, 00:18:44.095 "state": "enabled", 00:18:44.095 "thread": "nvmf_tgt_poll_group_000", 00:18:44.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:44.095 "listen_address": { 00:18:44.095 "trtype": "TCP", 00:18:44.095 "adrfam": "IPv4", 00:18:44.095 "traddr": "10.0.0.2", 00:18:44.095 "trsvcid": "4420" 00:18:44.095 }, 00:18:44.095 "peer_address": { 00:18:44.095 "trtype": "TCP", 00:18:44.095 "adrfam": "IPv4", 00:18:44.095 "traddr": "10.0.0.1", 00:18:44.095 "trsvcid": "58734" 00:18:44.095 }, 00:18:44.095 "auth": { 00:18:44.095 "state": "completed", 00:18:44.095 "digest": "sha384", 00:18:44.095 "dhgroup": "ffdhe6144" 00:18:44.095 } 00:18:44.095 } 00:18:44.095 ]' 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.095 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.353 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:44.353 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.353 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.353 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.353 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.611 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:44.611 08:54:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:45.544 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.544 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.545 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.545 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.545 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.545 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.545 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.545 08:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.813 08:54:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.387 00:18:46.387 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.387 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.387 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:46.643 { 00:18:46.643 "cntlid": 83, 00:18:46.643 "qid": 0, 00:18:46.643 "state": "enabled", 00:18:46.643 "thread": "nvmf_tgt_poll_group_000", 00:18:46.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:46.643 "listen_address": { 00:18:46.643 "trtype": "TCP", 00:18:46.643 "adrfam": "IPv4", 00:18:46.643 "traddr": "10.0.0.2", 00:18:46.643 
"trsvcid": "4420" 00:18:46.643 }, 00:18:46.643 "peer_address": { 00:18:46.643 "trtype": "TCP", 00:18:46.643 "adrfam": "IPv4", 00:18:46.643 "traddr": "10.0.0.1", 00:18:46.643 "trsvcid": "58760" 00:18:46.643 }, 00:18:46.643 "auth": { 00:18:46.643 "state": "completed", 00:18:46.643 "digest": "sha384", 00:18:46.643 "dhgroup": "ffdhe6144" 00:18:46.643 } 00:18:46.643 } 00:18:46.643 ]' 00:18:46.643 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.644 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.901 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:46.901 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:18:47.831 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.831 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:48.091 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:48.091 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.091 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.091 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.092 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.658 00:18:48.658 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.658 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:18:48.658 08:55:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.915 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.915 { 00:18:48.915 "cntlid": 85, 00:18:48.915 "qid": 0, 00:18:48.915 "state": "enabled", 00:18:48.915 "thread": "nvmf_tgt_poll_group_000", 00:18:48.915 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:48.915 "listen_address": { 00:18:48.915 "trtype": "TCP", 00:18:48.915 "adrfam": "IPv4", 00:18:48.915 "traddr": "10.0.0.2", 00:18:48.915 "trsvcid": "4420" 00:18:48.915 }, 00:18:48.915 "peer_address": { 00:18:48.915 "trtype": "TCP", 00:18:48.915 "adrfam": "IPv4", 00:18:48.915 "traddr": "10.0.0.1", 00:18:48.915 "trsvcid": "58806" 00:18:48.915 }, 00:18:48.915 "auth": { 00:18:48.915 "state": "completed", 00:18:48.915 "digest": "sha384", 00:18:48.915 "dhgroup": "ffdhe6144" 00:18:48.915 } 00:18:48.915 } 00:18:48.915 ]' 00:18:48.916 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.916 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.916 08:55:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.916 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.212 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.213 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.213 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.213 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.470 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:49.470 08:55:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.403 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.661 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.227 00:18:51.227 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.227 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.227 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:51.485 { 00:18:51.485 "cntlid": 87, 00:18:51.485 "qid": 0, 00:18:51.485 "state": "enabled", 00:18:51.485 "thread": "nvmf_tgt_poll_group_000", 00:18:51.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:51.485 "listen_address": { 00:18:51.485 "trtype": "TCP", 00:18:51.485 "adrfam": "IPv4", 00:18:51.485 "traddr": "10.0.0.2", 00:18:51.485 "trsvcid": "4420" 00:18:51.485 }, 00:18:51.485 "peer_address": { 00:18:51.485 "trtype": "TCP", 00:18:51.485 "adrfam": "IPv4", 00:18:51.485 "traddr": "10.0.0.1", 00:18:51.485 "trsvcid": "58830" 00:18:51.485 }, 00:18:51.485 "auth": { 00:18:51.485 "state": "completed", 00:18:51.485 "digest": "sha384", 00:18:51.485 "dhgroup": "ffdhe6144" 00:18:51.485 } 00:18:51.485 } 00:18:51.485 ]' 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.485 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.743 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:51.743 08:55:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.677 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.677 08:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.936 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.869 00:18:53.869 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.869 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.869 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.127 { 00:18:54.127 "cntlid": 89, 00:18:54.127 "qid": 0, 00:18:54.127 "state": "enabled", 00:18:54.127 "thread": "nvmf_tgt_poll_group_000", 00:18:54.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:54.127 "listen_address": { 00:18:54.127 "trtype": "TCP", 00:18:54.127 "adrfam": "IPv4", 00:18:54.127 "traddr": "10.0.0.2", 00:18:54.127 
"trsvcid": "4420" 00:18:54.127 }, 00:18:54.127 "peer_address": { 00:18:54.127 "trtype": "TCP", 00:18:54.127 "adrfam": "IPv4", 00:18:54.127 "traddr": "10.0.0.1", 00:18:54.127 "trsvcid": "44378" 00:18:54.127 }, 00:18:54.127 "auth": { 00:18:54.127 "state": "completed", 00:18:54.127 "digest": "sha384", 00:18:54.127 "dhgroup": "ffdhe8192" 00:18:54.127 } 00:18:54.127 } 00:18:54.127 ]' 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.127 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.385 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.385 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.385 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.385 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.385 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.642 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:54.642 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.575 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.833 08:55:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:55.833 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:56.766
00:18:56.766 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:56.766 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:56.766 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:57.024 {
00:18:57.024 "cntlid": 91,
00:18:57.024 "qid": 0,
00:18:57.024 "state": "enabled",
00:18:57.024 "thread": "nvmf_tgt_poll_group_000",
00:18:57.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:57.024 "listen_address": {
00:18:57.024 "trtype": "TCP",
00:18:57.024 "adrfam": "IPv4",
00:18:57.024 "traddr": "10.0.0.2",
00:18:57.024 "trsvcid": "4420"
00:18:57.024 },
00:18:57.024 "peer_address": {
00:18:57.024 "trtype": "TCP",
00:18:57.024 "adrfam": "IPv4",
00:18:57.024 "traddr": "10.0.0.1",
00:18:57.024 "trsvcid": "44404"
00:18:57.024 },
00:18:57.024 "auth": {
00:18:57.024 "state": "completed",
00:18:57.024 "digest": "sha384",
00:18:57.024 "dhgroup": "ffdhe8192"
00:18:57.024 }
00:18:57.024 }
00:18:57.024 ]'
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:57.024 08:55:10
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:57.024 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:57.281 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:57.281 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:57.281 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:57.539 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==:
00:18:57.539 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==:
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:58.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:58.472 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:58.729 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:59.662
00:18:59.662 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:59.662 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:59.662 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:59.919 08:55:12
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:59.919 {
00:18:59.919 "cntlid": 93,
00:18:59.919 "qid": 0,
00:18:59.919 "state": "enabled",
00:18:59.919 "thread": "nvmf_tgt_poll_group_000",
00:18:59.919 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:18:59.919 "listen_address": {
00:18:59.919 "trtype": "TCP",
00:18:59.919 "adrfam": "IPv4",
00:18:59.919 "traddr": "10.0.0.2",
00:18:59.919 "trsvcid": "4420"
00:18:59.919 },
00:18:59.919 "peer_address": {
00:18:59.919 "trtype": "TCP",
00:18:59.919 "adrfam": "IPv4",
00:18:59.919 "traddr": "10.0.0.1",
00:18:59.919 "trsvcid": "44414"
00:18:59.919 },
00:18:59.919 "auth": {
00:18:59.919 "state": "completed",
00:18:59.919 "digest": "sha384",
00:18:59.919 "dhgroup": "ffdhe8192"
00:18:59.919 }
00:18:59.919 }
00:18:59.919 ]'
00:18:59.919 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:59.919 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:00.177 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs:
00:19:00.177 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs:
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:01.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:01.110 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:01.368 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:19:02.301
00:19:02.301 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:02.301 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:02.301 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:02.559 {
00:19:02.559 "cntlid": 95,
00:19:02.559 "qid": 0,
00:19:02.559 "state": "enabled",
00:19:02.559 "thread": "nvmf_tgt_poll_group_000",
00:19:02.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:02.559 "listen_address": {
00:19:02.559 "trtype": "TCP",
00:19:02.559 "adrfam":
"IPv4", 00:19:02.559 "traddr": "10.0.0.2", 00:19:02.559 "trsvcid": "4420" 00:19:02.559 }, 00:19:02.559 "peer_address": { 00:19:02.559 "trtype": "TCP", 00:19:02.559 "adrfam": "IPv4", 00:19:02.559 "traddr": "10.0.0.1", 00:19:02.559 "trsvcid": "44436" 00:19:02.559 }, 00:19:02.559 "auth": { 00:19:02.559 "state": "completed", 00:19:02.559 "digest": "sha384", 00:19:02.559 "dhgroup": "ffdhe8192" 00:19:02.559 } 00:19:02.559 } 00:19:02.559 ]' 00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.559 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.817 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.817 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.817 08:55:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.074 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:03.074 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=:
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:04.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:04.018 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:04.276
08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:04.276 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:04.533
00:19:04.533 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:04.533 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:04.533 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:04.791 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:04.791 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:04.791 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:04.791 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:04.791 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:04.792 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:04.792 {
00:19:04.792 "cntlid": 97,
00:19:04.792 "qid": 0,
00:19:04.792 "state": "enabled",
00:19:04.792 "thread": "nvmf_tgt_poll_group_000",
00:19:04.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:04.792 "listen_address": {
00:19:04.792 "trtype": "TCP",
00:19:04.792 "adrfam": "IPv4",
00:19:04.792 "traddr": "10.0.0.2",
00:19:04.792 "trsvcid": "4420"
00:19:04.792 },
00:19:04.792 "peer_address": {
00:19:04.792 "trtype": "TCP",
00:19:04.792 "adrfam": "IPv4",
00:19:04.792 "traddr": "10.0.0.1",
00:19:04.792 "trsvcid": "58730"
00:19:04.792 },
00:19:04.792 "auth": {
00:19:04.792 "state": "completed",
00:19:04.792 "digest": "sha512",
00:19:04.792 "dhgroup": "null"
00:19:04.792 }
00:19:04.792 }
00:19:04.792 ]'
00:19:04.792 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:04.792 08:55:18
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:04.792 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:04.792 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:04.792 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:05.049 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:05.049 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:05.049 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:05.306 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=:
00:19:05.307 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=:
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:06.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:06.239 08:55:19
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:06.239 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:06.499 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:06.757
00:19:06.757 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:19:06.757 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:19:06.757 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:19:07.013 {
00:19:07.013 "cntlid": 99,
00:19:07.013 "qid": 0,
00:19:07.013 "state": "enabled",
00:19:07.013 "thread": "nvmf_tgt_poll_group_000",
00:19:07.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:19:07.013 "listen_address": {
00:19:07.013 "trtype": "TCP",
00:19:07.013 "adrfam": "IPv4",
00:19:07.013 "traddr": "10.0.0.2",
00:19:07.013 "trsvcid": "4420"
00:19:07.013 },
00:19:07.013 "peer_address": {
00:19:07.013 "trtype": "TCP",
00:19:07.013 "adrfam": "IPv4",
00:19:07.013 "traddr": "10.0.0.1",
00:19:07.013 "trsvcid": "58746"
00:19:07.013 },
00:19:07.013 "auth": {
00:19:07.013 "state": "completed",
00:19:07.013 "digest": "sha512",
00:19:07.013 "dhgroup": "null"
00:19:07.013 }
00:19:07.013 }
00:19:07.013 ]'
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:07.013
08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.013 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.577 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:07.577 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.511 
08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.511 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.075 00:19:09.075 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.075 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.075 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.333 { 00:19:09.333 "cntlid": 101, 00:19:09.333 "qid": 0, 00:19:09.333 "state": "enabled", 00:19:09.333 "thread": "nvmf_tgt_poll_group_000", 00:19:09.333 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:09.333 "listen_address": { 00:19:09.333 "trtype": "TCP", 00:19:09.333 "adrfam": "IPv4", 00:19:09.333 "traddr": "10.0.0.2", 00:19:09.333 "trsvcid": "4420" 00:19:09.333 }, 00:19:09.333 "peer_address": { 00:19:09.333 "trtype": "TCP", 00:19:09.333 "adrfam": "IPv4", 00:19:09.333 "traddr": "10.0.0.1", 00:19:09.333 "trsvcid": "58772" 00:19:09.333 }, 00:19:09.333 "auth": { 00:19:09.333 "state": "completed", 00:19:09.333 "digest": "sha512", 00:19:09.333 "dhgroup": "null" 00:19:09.333 } 00:19:09.333 } 00:19:09.333 ]' 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.333 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.591 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:09.591 08:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.524 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.782 08:55:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.782 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.347 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.347 
08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.347 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.605 { 00:19:11.605 "cntlid": 103, 00:19:11.605 "qid": 0, 00:19:11.605 "state": "enabled", 00:19:11.605 "thread": "nvmf_tgt_poll_group_000", 00:19:11.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:11.605 "listen_address": { 00:19:11.605 "trtype": "TCP", 00:19:11.605 "adrfam": "IPv4", 00:19:11.605 "traddr": "10.0.0.2", 00:19:11.605 "trsvcid": "4420" 00:19:11.605 }, 00:19:11.605 "peer_address": { 00:19:11.605 "trtype": "TCP", 00:19:11.605 "adrfam": "IPv4", 00:19:11.605 "traddr": "10.0.0.1", 00:19:11.605 "trsvcid": "58782" 00:19:11.605 }, 00:19:11.605 "auth": { 00:19:11.605 "state": "completed", 00:19:11.605 "digest": "sha512", 00:19:11.605 "dhgroup": "null" 00:19:11.605 } 00:19:11.605 } 00:19:11.605 ]' 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.605 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.862 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:11.862 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:12.831 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.113 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:13.370 00:19:13.370 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.370 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.370 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.628 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.628 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.628 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:13.628 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.886 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.886 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.886 { 00:19:13.886 "cntlid": 105, 00:19:13.886 "qid": 0, 00:19:13.886 "state": "enabled", 00:19:13.886 "thread": "nvmf_tgt_poll_group_000", 00:19:13.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:13.886 "listen_address": { 00:19:13.886 "trtype": "TCP", 00:19:13.886 "adrfam": "IPv4", 00:19:13.886 "traddr": "10.0.0.2", 00:19:13.886 "trsvcid": "4420" 00:19:13.886 }, 00:19:13.886 "peer_address": { 00:19:13.886 "trtype": "TCP", 00:19:13.886 "adrfam": "IPv4", 00:19:13.886 "traddr": "10.0.0.1", 00:19:13.886 "trsvcid": "53350" 00:19:13.886 }, 00:19:13.886 "auth": { 00:19:13.886 "state": "completed", 00:19:13.886 "digest": "sha512", 00:19:13.886 "dhgroup": "ffdhe2048" 00:19:13.886 } 00:19:13.886 } 00:19:13.886 ]' 00:19:13.886 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.886 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.886 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.886 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.886 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.886 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.886 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.886 08:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.145 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:14.145 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.078 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.335 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:15.900 00:19:15.900 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.900 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.900 08:55:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.158 { 00:19:16.158 "cntlid": 107, 00:19:16.158 "qid": 0, 00:19:16.158 "state": "enabled", 00:19:16.158 "thread": "nvmf_tgt_poll_group_000", 00:19:16.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:16.158 
"listen_address": { 00:19:16.158 "trtype": "TCP", 00:19:16.158 "adrfam": "IPv4", 00:19:16.158 "traddr": "10.0.0.2", 00:19:16.158 "trsvcid": "4420" 00:19:16.158 }, 00:19:16.158 "peer_address": { 00:19:16.158 "trtype": "TCP", 00:19:16.158 "adrfam": "IPv4", 00:19:16.158 "traddr": "10.0.0.1", 00:19:16.158 "trsvcid": "53384" 00:19:16.158 }, 00:19:16.158 "auth": { 00:19:16.158 "state": "completed", 00:19:16.158 "digest": "sha512", 00:19:16.158 "dhgroup": "ffdhe2048" 00:19:16.158 } 00:19:16.158 } 00:19:16.158 ]' 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.158 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.415 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:16.415 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.347 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.605 08:55:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.171 00:19:18.171 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:18.171 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:18.171 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:18.429 { 00:19:18.429 "cntlid": 109, 00:19:18.429 "qid": 0, 00:19:18.429 "state": "enabled", 00:19:18.429 "thread": "nvmf_tgt_poll_group_000", 00:19:18.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:18.429 "listen_address": { 00:19:18.429 "trtype": "TCP", 00:19:18.429 "adrfam": "IPv4", 00:19:18.429 "traddr": "10.0.0.2", 00:19:18.429 "trsvcid": "4420" 00:19:18.429 }, 00:19:18.429 "peer_address": { 00:19:18.429 "trtype": "TCP", 00:19:18.429 "adrfam": "IPv4", 00:19:18.429 "traddr": "10.0.0.1", 00:19:18.429 "trsvcid": "53424" 00:19:18.429 }, 00:19:18.429 "auth": { 00:19:18.429 "state": "completed", 00:19:18.429 "digest": "sha512", 00:19:18.429 "dhgroup": "ffdhe2048" 00:19:18.429 } 00:19:18.429 } 00:19:18.429 ]' 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:18.429 08:55:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.429 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.687 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:18.687 08:55:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:19.621 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:20.186 08:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.186 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:20.443 00:19:20.443 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:20.443 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:20.443 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.702 08:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:20.702 { 00:19:20.702 "cntlid": 111, 00:19:20.702 "qid": 0, 00:19:20.702 "state": "enabled", 00:19:20.702 "thread": "nvmf_tgt_poll_group_000", 00:19:20.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:20.702 "listen_address": { 00:19:20.702 "trtype": "TCP", 00:19:20.702 "adrfam": "IPv4", 00:19:20.702 "traddr": "10.0.0.2", 00:19:20.702 "trsvcid": "4420" 00:19:20.702 }, 00:19:20.702 "peer_address": { 00:19:20.702 "trtype": "TCP", 00:19:20.702 "adrfam": "IPv4", 00:19:20.702 "traddr": "10.0.0.1", 00:19:20.702 "trsvcid": "53444" 00:19:20.702 }, 00:19:20.702 "auth": { 00:19:20.702 "state": "completed", 00:19:20.702 "digest": "sha512", 00:19:20.702 "dhgroup": "ffdhe2048" 00:19:20.702 } 00:19:20.702 } 00:19:20.702 ]' 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.702 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.702 08:55:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.268 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:21.268 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.201 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.202 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.202 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.202 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.767 00:19:22.767 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.767 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.767 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.025 { 00:19:23.025 "cntlid": 113, 00:19:23.025 "qid": 0, 00:19:23.025 "state": "enabled", 00:19:23.025 "thread": "nvmf_tgt_poll_group_000", 00:19:23.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:23.025 "listen_address": { 
00:19:23.025 "trtype": "TCP", 00:19:23.025 "adrfam": "IPv4", 00:19:23.025 "traddr": "10.0.0.2", 00:19:23.025 "trsvcid": "4420" 00:19:23.025 }, 00:19:23.025 "peer_address": { 00:19:23.025 "trtype": "TCP", 00:19:23.025 "adrfam": "IPv4", 00:19:23.025 "traddr": "10.0.0.1", 00:19:23.025 "trsvcid": "53476" 00:19:23.025 }, 00:19:23.025 "auth": { 00:19:23.025 "state": "completed", 00:19:23.025 "digest": "sha512", 00:19:23.025 "dhgroup": "ffdhe3072" 00:19:23.025 } 00:19:23.025 } 00:19:23.025 ]' 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.025 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.283 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:23.283 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.216 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.473 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.039 00:19:25.039 08:55:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.039 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.039 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.296 { 00:19:25.296 "cntlid": 115, 00:19:25.296 "qid": 0, 00:19:25.296 "state": "enabled", 00:19:25.296 "thread": "nvmf_tgt_poll_group_000", 00:19:25.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:25.296 "listen_address": { 00:19:25.296 "trtype": "TCP", 00:19:25.296 "adrfam": "IPv4", 00:19:25.296 "traddr": "10.0.0.2", 00:19:25.296 "trsvcid": "4420" 00:19:25.296 }, 00:19:25.296 "peer_address": { 00:19:25.296 "trtype": "TCP", 00:19:25.296 "adrfam": "IPv4", 00:19:25.296 "traddr": "10.0.0.1", 00:19:25.296 "trsvcid": "40786" 00:19:25.296 }, 00:19:25.296 "auth": { 00:19:25.296 "state": "completed", 00:19:25.296 "digest": "sha512", 00:19:25.296 "dhgroup": "ffdhe3072" 00:19:25.296 } 00:19:25.296 } 00:19:25.296 ]' 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.296 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.554 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:25.554 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.487 08:55:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.487 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.746 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.311 00:19:27.311 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.311 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.311 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.570 { 00:19:27.570 "cntlid": 117, 00:19:27.570 "qid": 0, 00:19:27.570 "state": "enabled", 00:19:27.570 "thread": "nvmf_tgt_poll_group_000", 00:19:27.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:27.570 "listen_address": { 00:19:27.570 "trtype": "TCP", 00:19:27.570 "adrfam": "IPv4", 00:19:27.570 "traddr": "10.0.0.2", 00:19:27.570 "trsvcid": "4420" 00:19:27.570 }, 00:19:27.570 "peer_address": { 00:19:27.570 "trtype": "TCP", 00:19:27.570 "adrfam": "IPv4", 00:19:27.570 "traddr": "10.0.0.1", 00:19:27.570 "trsvcid": "40818" 00:19:27.570 }, 00:19:27.570 "auth": { 00:19:27.570 "state": "completed", 00:19:27.570 "digest": "sha512", 00:19:27.570 "dhgroup": "ffdhe3072" 00:19:27.570 } 00:19:27.570 } 00:19:27.570 ]' 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.570 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.827 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:27.828 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:28.760 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.018 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:29.582 00:19:29.582 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.582 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.583 { 00:19:29.583 "cntlid": 119, 00:19:29.583 "qid": 0, 00:19:29.583 "state": "enabled", 00:19:29.583 "thread": "nvmf_tgt_poll_group_000", 00:19:29.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:29.583 "listen_address": { 00:19:29.583 
"trtype": "TCP", 00:19:29.583 "adrfam": "IPv4", 00:19:29.583 "traddr": "10.0.0.2", 00:19:29.583 "trsvcid": "4420" 00:19:29.583 }, 00:19:29.583 "peer_address": { 00:19:29.583 "trtype": "TCP", 00:19:29.583 "adrfam": "IPv4", 00:19:29.583 "traddr": "10.0.0.1", 00:19:29.583 "trsvcid": "40828" 00:19:29.583 }, 00:19:29.583 "auth": { 00:19:29.583 "state": "completed", 00:19:29.583 "digest": "sha512", 00:19:29.583 "dhgroup": "ffdhe3072" 00:19:29.583 } 00:19:29.583 } 00:19:29.583 ]' 00:19:29.583 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.840 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.098 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:30.098 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.031 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.289 08:55:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.289 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.853 00:19:31.853 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.853 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.853 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.853 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:31.854 { 00:19:31.854 "cntlid": 121, 00:19:31.854 "qid": 0, 00:19:31.854 "state": "enabled", 00:19:31.854 "thread": "nvmf_tgt_poll_group_000", 00:19:31.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:31.854 "listen_address": { 00:19:31.854 "trtype": "TCP", 00:19:31.854 "adrfam": "IPv4", 00:19:31.854 "traddr": "10.0.0.2", 00:19:31.854 "trsvcid": "4420" 00:19:31.854 }, 00:19:31.854 "peer_address": { 00:19:31.854 "trtype": "TCP", 00:19:31.854 "adrfam": "IPv4", 00:19:31.854 "traddr": "10.0.0.1", 00:19:31.854 "trsvcid": "40856" 00:19:31.854 }, 00:19:31.854 "auth": { 00:19:31.854 "state": "completed", 00:19:31.854 "digest": "sha512", 00:19:31.854 "dhgroup": "ffdhe4096" 00:19:31.854 } 00:19:31.854 } 00:19:31.854 ]' 00:19:31.854 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.111 08:55:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.111 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.369 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:32.369 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.302 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.560 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.125 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.125 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.125 { 00:19:34.125 "cntlid": 123, 00:19:34.125 "qid": 0, 00:19:34.125 "state": "enabled", 00:19:34.125 "thread": "nvmf_tgt_poll_group_000", 00:19:34.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:34.125 "listen_address": { 00:19:34.125 "trtype": "TCP", 00:19:34.125 "adrfam": "IPv4", 00:19:34.125 "traddr": "10.0.0.2", 00:19:34.125 "trsvcid": "4420" 00:19:34.125 }, 00:19:34.125 "peer_address": { 00:19:34.125 "trtype": "TCP", 00:19:34.125 "adrfam": "IPv4", 00:19:34.125 "traddr": "10.0.0.1", 00:19:34.125 "trsvcid": "39562" 00:19:34.125 }, 00:19:34.125 "auth": { 00:19:34.125 "state": "completed", 00:19:34.125 "digest": "sha512", 00:19:34.125 "dhgroup": "ffdhe4096" 00:19:34.125 } 00:19:34.125 } 00:19:34.125 ]' 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.383 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.641 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:34.641 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
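The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion visible at `auth.sh@68` in each iteration adds the `--dhchap-ctrlr-key` flag only when a controller key exists for that index. A minimal bash illustration of that `:+` alternate-value expansion (the array contents here are illustrative, not the test's real keys):

```shell
# ckeys[0] has a value; ckeys[3] is deliberately empty, like a key with no
# controller secret configured.
ckeys=([0]="secret0" [3]="")

# ${var:+text} expands to "text" only when var is set AND non-empty, so the
# flag pair appears for index 0 and vanishes entirely for index 3.
flag0=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"})
flag3=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})

echo "key0 flags: ${#flag0[@]}"   # 2 words: the flag and its value
echo "key3 flags: ${#flag3[@]}"   # 0 words: no ctrlr-key flag passed
```

Expanding into an array this way lets the script splice the optional flag into an RPC command line without quoting problems when the key is absent.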
00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.575 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.833 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.398 00:19:36.398 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.398 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.398 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.656 { 00:19:36.656 "cntlid": 125, 00:19:36.656 "qid": 0, 00:19:36.656 "state": "enabled", 00:19:36.656 "thread": "nvmf_tgt_poll_group_000", 00:19:36.656 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:36.656 "listen_address": { 00:19:36.656 "trtype": "TCP", 00:19:36.656 "adrfam": "IPv4", 00:19:36.656 "traddr": "10.0.0.2", 00:19:36.656 "trsvcid": "4420" 00:19:36.656 }, 00:19:36.656 "peer_address": { 00:19:36.656 "trtype": "TCP", 00:19:36.656 "adrfam": "IPv4", 00:19:36.656 "traddr": "10.0.0.1", 00:19:36.656 "trsvcid": "39580" 00:19:36.656 }, 00:19:36.656 "auth": { 00:19:36.656 "state": "completed", 00:19:36.656 "digest": "sha512", 00:19:36.656 "dhgroup": "ffdhe4096" 00:19:36.656 } 00:19:36.656 } 00:19:36.656 ]' 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.656 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.913 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:36.913 08:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.847 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.105 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:38.670 00:19:38.670 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:38.670 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.670 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.928 { 00:19:38.928 "cntlid": 127, 00:19:38.928 "qid": 0, 00:19:38.928 "state": "enabled", 00:19:38.928 "thread": "nvmf_tgt_poll_group_000", 00:19:38.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:38.928 "listen_address": { 00:19:38.928 "trtype": "TCP", 00:19:38.928 "adrfam": "IPv4", 00:19:38.928 "traddr": "10.0.0.2", 00:19:38.928 "trsvcid": "4420" 00:19:38.928 }, 00:19:38.928 "peer_address": { 00:19:38.928 "trtype": "TCP", 00:19:38.928 "adrfam": "IPv4", 00:19:38.928 "traddr": "10.0.0.1", 00:19:38.928 "trsvcid": "39614" 00:19:38.928 }, 00:19:38.928 "auth": { 00:19:38.928 "state": "completed", 00:19:38.928 "digest": "sha512", 00:19:38.928 "dhgroup": "ffdhe4096" 00:19:38.928 } 00:19:38.928 } 00:19:38.928 ]' 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.928 08:55:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.928 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.186 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:39.186 08:55:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.119 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.683 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.684 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.942 00:19:41.199 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.199 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.199 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:41.457 { 00:19:41.457 "cntlid": 129, 00:19:41.457 "qid": 0, 00:19:41.457 "state": "enabled", 00:19:41.457 "thread": "nvmf_tgt_poll_group_000", 00:19:41.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:41.457 "listen_address": { 00:19:41.457 "trtype": "TCP", 00:19:41.457 "adrfam": "IPv4", 00:19:41.457 "traddr": "10.0.0.2", 00:19:41.457 "trsvcid": "4420" 00:19:41.457 }, 00:19:41.457 "peer_address": { 00:19:41.457 "trtype": "TCP", 00:19:41.457 "adrfam": "IPv4", 00:19:41.457 "traddr": "10.0.0.1", 00:19:41.457 "trsvcid": "39640" 00:19:41.457 }, 00:19:41.457 "auth": { 00:19:41.457 "state": "completed", 00:19:41.457 "digest": "sha512", 00:19:41.457 "dhgroup": "ffdhe6144" 00:19:41.457 } 00:19:41.457 } 00:19:41.457 ]' 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.457 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.715 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:41.715 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.677 08:55:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.677 08:55:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.959 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.524 00:19:43.524 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.524 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.524 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.782 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.782 { 00:19:43.782 "cntlid": 131, 00:19:43.782 "qid": 0, 00:19:43.782 "state": 
"enabled", 00:19:43.782 "thread": "nvmf_tgt_poll_group_000", 00:19:43.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:43.782 "listen_address": { 00:19:43.782 "trtype": "TCP", 00:19:43.782 "adrfam": "IPv4", 00:19:43.782 "traddr": "10.0.0.2", 00:19:43.782 "trsvcid": "4420" 00:19:43.782 }, 00:19:43.782 "peer_address": { 00:19:43.782 "trtype": "TCP", 00:19:43.782 "adrfam": "IPv4", 00:19:43.782 "traddr": "10.0.0.1", 00:19:43.782 "trsvcid": "59584" 00:19:43.782 }, 00:19:43.782 "auth": { 00:19:43.782 "state": "completed", 00:19:43.782 "digest": "sha512", 00:19:43.783 "dhgroup": "ffdhe6144" 00:19:43.783 } 00:19:43.783 } 00:19:43.783 ]' 00:19:43.783 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.783 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.783 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.783 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:43.783 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.783 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.783 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.783 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.348 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret 
DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:44.348 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.282 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.847 00:19:45.847 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.847 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.847 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.104 { 00:19:46.104 "cntlid": 133, 00:19:46.104 "qid": 0, 00:19:46.104 "state": "enabled", 00:19:46.104 "thread": "nvmf_tgt_poll_group_000", 00:19:46.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:46.104 "listen_address": { 00:19:46.104 "trtype": "TCP", 00:19:46.104 "adrfam": "IPv4", 00:19:46.104 "traddr": "10.0.0.2", 00:19:46.104 "trsvcid": "4420" 00:19:46.104 }, 00:19:46.104 "peer_address": { 00:19:46.104 "trtype": "TCP", 00:19:46.104 "adrfam": "IPv4", 00:19:46.104 "traddr": "10.0.0.1", 00:19:46.104 "trsvcid": "59618" 00:19:46.104 }, 00:19:46.104 "auth": { 00:19:46.104 "state": "completed", 00:19:46.104 "digest": "sha512", 00:19:46.104 "dhgroup": "ffdhe6144" 00:19:46.104 } 
00:19:46.104 } 00:19:46.104 ]' 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.104 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.362 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:46.362 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.362 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.362 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.362 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.619 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:46.619 08:55:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:47.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:47.552 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:47.810 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.811 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:48.376 00:19:48.376 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.376 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.376 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.634 { 00:19:48.634 "cntlid": 135, 00:19:48.634 "qid": 0, 00:19:48.634 "state": "enabled", 00:19:48.634 "thread": "nvmf_tgt_poll_group_000", 00:19:48.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:48.634 "listen_address": { 00:19:48.634 "trtype": "TCP", 00:19:48.634 "adrfam": "IPv4", 00:19:48.634 "traddr": "10.0.0.2", 00:19:48.634 "trsvcid": "4420" 00:19:48.634 }, 00:19:48.634 "peer_address": { 00:19:48.634 "trtype": "TCP", 00:19:48.634 "adrfam": "IPv4", 00:19:48.634 "traddr": "10.0.0.1", 00:19:48.634 "trsvcid": "59644" 00:19:48.634 }, 00:19:48.634 "auth": { 00:19:48.634 "state": "completed", 00:19:48.634 "digest": "sha512", 00:19:48.634 "dhgroup": "ffdhe6144" 00:19:48.634 } 00:19:48.634 } 00:19:48.634 ]' 00:19:48.634 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.892 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.892 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.892 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:48.892 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.892 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.892 08:56:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.892 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.151 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:49.151 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.084 08:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.084 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.341 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.342 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.342 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.342 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.342 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.276 00:19:51.276 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.276 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.276 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.534 { 00:19:51.534 "cntlid": 137, 00:19:51.534 "qid": 0, 00:19:51.534 "state": "enabled", 00:19:51.534 "thread": "nvmf_tgt_poll_group_000", 00:19:51.534 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:51.534 "listen_address": { 00:19:51.534 "trtype": "TCP", 00:19:51.534 "adrfam": "IPv4", 00:19:51.534 "traddr": "10.0.0.2", 00:19:51.534 "trsvcid": "4420" 00:19:51.534 }, 00:19:51.534 "peer_address": { 00:19:51.534 "trtype": "TCP", 00:19:51.534 "adrfam": "IPv4", 00:19:51.534 "traddr": "10.0.0.1", 00:19:51.534 "trsvcid": "59660" 00:19:51.534 }, 00:19:51.534 "auth": { 00:19:51.534 "state": "completed", 00:19:51.534 "digest": "sha512", 00:19:51.534 "dhgroup": "ffdhe8192" 00:19:51.534 } 00:19:51.534 } 00:19:51.534 ]' 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.534 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.792 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:51.792 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:52.725 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:52.983 08:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.983 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.917 00:19:53.917 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.917 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.917 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.175 { 00:19:54.175 "cntlid": 139, 00:19:54.175 "qid": 0, 00:19:54.175 "state": "enabled", 00:19:54.175 "thread": "nvmf_tgt_poll_group_000", 00:19:54.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:54.175 "listen_address": { 00:19:54.175 "trtype": "TCP", 00:19:54.175 "adrfam": "IPv4", 00:19:54.175 "traddr": "10.0.0.2", 00:19:54.175 "trsvcid": "4420" 00:19:54.175 }, 00:19:54.175 "peer_address": { 00:19:54.175 "trtype": "TCP", 00:19:54.175 "adrfam": "IPv4", 00:19:54.175 "traddr": "10.0.0.1", 00:19:54.175 "trsvcid": "50606" 00:19:54.175 }, 00:19:54.175 "auth": { 00:19:54.175 "state": 
"completed", 00:19:54.175 "digest": "sha512", 00:19:54.175 "dhgroup": "ffdhe8192" 00:19:54.175 } 00:19:54.175 } 00:19:54.175 ]' 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.175 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.433 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:54.433 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.433 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.433 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.433 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.691 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:54.691 08:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: --dhchap-ctrl-secret DHHC-1:02:NjAwYjA3YmM3N2JiZDY3YWY3MjYxYWUyMTBjOGI4ODU0YTA4YTlhMzJiOWFiNjgzoozGqg==: 00:19:55.626 08:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.626 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.884 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.884 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.884 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.884 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.884 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.819 00:19:56.819 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.819 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.819 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.076 
08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.076 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.076 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.076 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.076 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.076 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.076 { 00:19:57.077 "cntlid": 141, 00:19:57.077 "qid": 0, 00:19:57.077 "state": "enabled", 00:19:57.077 "thread": "nvmf_tgt_poll_group_000", 00:19:57.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:57.077 "listen_address": { 00:19:57.077 "trtype": "TCP", 00:19:57.077 "adrfam": "IPv4", 00:19:57.077 "traddr": "10.0.0.2", 00:19:57.077 "trsvcid": "4420" 00:19:57.077 }, 00:19:57.077 "peer_address": { 00:19:57.077 "trtype": "TCP", 00:19:57.077 "adrfam": "IPv4", 00:19:57.077 "traddr": "10.0.0.1", 00:19:57.077 "trsvcid": "50636" 00:19:57.077 }, 00:19:57.077 "auth": { 00:19:57.077 "state": "completed", 00:19:57.077 "digest": "sha512", 00:19:57.077 "dhgroup": "ffdhe8192" 00:19:57.077 } 00:19:57.077 } 00:19:57.077 ]' 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:57.077 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.077 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.642 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:57.642 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:01:NWFjOWYzZjQyMWI1ODA3MjczY2JlNDkxNDE4NTc5MmI7tlBs: 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.576 
08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.576 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.834 08:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.834 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.769 00:19:59.769 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.769 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.769 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.769 { 00:19:59.769 "cntlid": 143, 
00:19:59.769 "qid": 0, 00:19:59.769 "state": "enabled", 00:19:59.769 "thread": "nvmf_tgt_poll_group_000", 00:19:59.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:59.769 "listen_address": { 00:19:59.769 "trtype": "TCP", 00:19:59.769 "adrfam": "IPv4", 00:19:59.769 "traddr": "10.0.0.2", 00:19:59.769 "trsvcid": "4420" 00:19:59.769 }, 00:19:59.769 "peer_address": { 00:19:59.769 "trtype": "TCP", 00:19:59.769 "adrfam": "IPv4", 00:19:59.769 "traddr": "10.0.0.1", 00:19:59.769 "trsvcid": "50670" 00:19:59.769 }, 00:19:59.769 "auth": { 00:19:59.769 "state": "completed", 00:19:59.769 "digest": "sha512", 00:19:59.769 "dhgroup": "ffdhe8192" 00:19:59.769 } 00:19:59.769 } 00:19:59.769 ]' 00:19:59.769 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.027 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.284 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:20:00.284 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:20:01.218 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.475 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.409 00:20:02.409 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.409 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.409 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.667 { 00:20:02.667 "cntlid": 145, 00:20:02.667 "qid": 0, 00:20:02.667 "state": "enabled", 00:20:02.667 "thread": "nvmf_tgt_poll_group_000", 00:20:02.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:02.667 "listen_address": { 
00:20:02.667 "trtype": "TCP", 00:20:02.667 "adrfam": "IPv4", 00:20:02.667 "traddr": "10.0.0.2", 00:20:02.667 "trsvcid": "4420" 00:20:02.667 }, 00:20:02.667 "peer_address": { 00:20:02.667 "trtype": "TCP", 00:20:02.667 "adrfam": "IPv4", 00:20:02.667 "traddr": "10.0.0.1", 00:20:02.667 "trsvcid": "50696" 00:20:02.667 }, 00:20:02.667 "auth": { 00:20:02.667 "state": "completed", 00:20:02.667 "digest": "sha512", 00:20:02.667 "dhgroup": "ffdhe8192" 00:20:02.667 } 00:20:02.667 } 00:20:02.667 ]' 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.667 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.925 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:20:02.925 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzEzMWFlYjNlNTQ5MzMzMmIyZWFhNjlkYmY4MDA3MzY5ZWRlYWI1ZGYzZDFjMGJjHX2zbQ==: --dhchap-ctrl-secret DHHC-1:03:Yjc1MGQ5NDIxMGFiMDlkNjc0ZjgzZGNhM2JhMTdmY2VkNTE3YjhiMWQ4YTVhY2RkM2Q0ZWMwYjEyZjkwYzVhNeeiEIg=: 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:03.859 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:04.793 request: 00:20:04.793 { 00:20:04.793 "name": "nvme0", 00:20:04.793 "trtype": "tcp", 00:20:04.793 "traddr": "10.0.0.2", 00:20:04.793 "adrfam": "ipv4", 00:20:04.793 "trsvcid": "4420", 00:20:04.793 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:04.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:04.793 "prchk_reftag": false, 00:20:04.793 "prchk_guard": false, 00:20:04.793 "hdgst": false, 00:20:04.793 "ddgst": 
false, 00:20:04.793 "dhchap_key": "key2", 00:20:04.793 "allow_unrecognized_csi": false, 00:20:04.793 "method": "bdev_nvme_attach_controller", 00:20:04.793 "req_id": 1 00:20:04.793 } 00:20:04.793 Got JSON-RPC error response 00:20:04.793 response: 00:20:04.793 { 00:20:04.793 "code": -5, 00:20:04.793 "message": "Input/output error" 00:20:04.793 } 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
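The failed attach above (key2 was never granted to the host, so the target returns the `-5` Input/output error) runs under the `NOT` negative-test wrapper from `autotest_common.sh`, whose `es` bookkeeping is visible in the trace. A minimal sketch of that pattern, simplified; the real helper also validates the argument type and manages xtrace:

```shell
# Simplified NOT(): succeed only when the wrapped command fails,
# which is what a negative test expects.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

NOT false && echo "negative test passed"
```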
00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:04.793 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.359 request: 00:20:05.359 { 00:20:05.359 "name": "nvme0", 00:20:05.359 "trtype": "tcp", 00:20:05.359 "traddr": "10.0.0.2", 
00:20:05.359 "adrfam": "ipv4", 00:20:05.359 "trsvcid": "4420", 00:20:05.359 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:05.359 "prchk_reftag": false, 00:20:05.359 "prchk_guard": false, 00:20:05.359 "hdgst": false, 00:20:05.359 "ddgst": false, 00:20:05.359 "dhchap_key": "key1", 00:20:05.359 "dhchap_ctrlr_key": "ckey2", 00:20:05.359 "allow_unrecognized_csi": false, 00:20:05.359 "method": "bdev_nvme_attach_controller", 00:20:05.359 "req_id": 1 00:20:05.359 } 00:20:05.359 Got JSON-RPC error response 00:20:05.359 response: 00:20:05.359 { 00:20:05.359 "code": -5, 00:20:05.359 "message": "Input/output error" 00:20:05.359 } 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
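The `--dhchap-secret` strings passed to `nvme connect` earlier in this run use the `DHHC-1:<hash>:<base64 secret>:` representation from NVMe-oF in-band authentication; to my understanding the second field identifies the hash used to transform the secret (`00` for none, `01`/`02`/`03` for SHA-256/384/512). A small sketch that splits out that field from a made-up key (not a real secret from this log):

```shell
# Hypothetical key for illustration only; field 2 is the hash identifier.
key="DHHC-1:03:bWFkZS11cC1zZWNyZXQtZm9yLWlsbHVzdHJhdGlvbg==:"
hash_id=$(printf '%s' "$key" | cut -d: -f2)
echo "hash id: $hash_id"
```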
00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.359 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.617 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.183 request: 00:20:06.183 { 00:20:06.183 "name": "nvme0", 00:20:06.183 "trtype": "tcp", 00:20:06.183 "traddr": "10.0.0.2", 00:20:06.183 "adrfam": "ipv4", 00:20:06.183 "trsvcid": "4420", 00:20:06.183 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:06.183 "prchk_reftag": false, 00:20:06.183 "prchk_guard": false, 00:20:06.183 "hdgst": false, 00:20:06.183 "ddgst": false, 00:20:06.183 "dhchap_key": "key1", 00:20:06.183 "dhchap_ctrlr_key": "ckey1", 00:20:06.183 "allow_unrecognized_csi": false, 00:20:06.183 "method": "bdev_nvme_attach_controller", 00:20:06.183 "req_id": 1 00:20:06.183 } 00:20:06.183 Got JSON-RPC error response 00:20:06.183 response: 00:20:06.183 { 00:20:06.183 "code": -5, 00:20:06.183 "message": "Input/output error" 00:20:06.183 } 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 
08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 814538 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 814538 ']' 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 814538 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814538 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814538' 00:20:06.442 killing process with pid 814538 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 814538 00:20:06.442 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 814538 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
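The `killprocess 814538` sequence above follows the usual `autotest_common.sh` shape: confirm the PID is alive and not `sudo`, signal it, then reap it. A reduced sketch of that idea, leaving out the `uname` and process-name checks shown in the trace:

```shell
# Simplified killprocess(): verify the PID exists, terminate it, reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # no such process
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the kill status
}

sleep 30 &
killprocess $! && echo "process stopped"
```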
00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=837357 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 837357 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 837357 ']' 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
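The `waitforlisten 837357` call above blocks until the freshly started `nvmf_tgt` exposes its RPC socket; the `max_retries=100` local and the "Waiting for process to start up and listen on UNIX domain socket" message in the trace come from that helper. A bare-bones sketch of the polling idea, not the real helper:

```shell
# Poll until a UNIX-domain socket appears, or give up after N tries.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}

wait_for_socket /var/tmp/does-not-exist.sock 2 || echo "timed out"
```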
00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.701 08:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 837357 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 837357 ']' 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
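The `for i in "${!keys[@]}"` loop at `target/auth.sh@174-176` registers each generated key file, plus the matching controller key when one exists, with the target's keyring. A hedged reconstruction of that loop; the file names are the ones visible in this run, and `rpc_cmd` stands in for the wrapper that talks to `/var/tmp/spdk.sock`:

```shell
# Key files as generated earlier in this run (names taken from the log).
keys=(/tmp/spdk.key-null.v0j /tmp/spdk.key-sha256.lfE /tmp/spdk.key-sha384.LD9 /tmp/spdk.key-sha512.XCz)
ckeys=(/tmp/spdk.key-sha512.YGe /tmp/spdk.key-sha384.SxW /tmp/spdk.key-sha256.EsH "")

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
    # In this run keys 0-2 have controller keys; key3 does not
    # (the trace shows [[ -n '' ]] for ckey3).
    [ -n "${ckeys[$i]}" ] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
done
```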
00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.959 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.218 null0 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.v0j 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.218 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.YGe ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YGe 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lfE 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.SxW ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.SxW 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.LD9 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EsH ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EsH 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.XCz 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.477 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:07.478 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.851 nvme0n1 00:20:08.851 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.851 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.851 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.110 { 00:20:09.110 "cntlid": 1, 00:20:09.110 "qid": 0, 00:20:09.110 "state": "enabled", 00:20:09.110 "thread": "nvmf_tgt_poll_group_000", 00:20:09.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:09.110 "listen_address": { 00:20:09.110 "trtype": "TCP", 00:20:09.110 "adrfam": "IPv4", 00:20:09.110 "traddr": "10.0.0.2", 00:20:09.110 "trsvcid": "4420" 00:20:09.110 }, 00:20:09.110 "peer_address": { 00:20:09.110 "trtype": "TCP", 00:20:09.110 "adrfam": "IPv4", 00:20:09.110 "traddr": "10.0.0.1", 00:20:09.110 "trsvcid": "54556" 00:20:09.110 }, 00:20:09.110 "auth": { 00:20:09.110 "state": "completed", 00:20:09.110 "digest": "sha512", 00:20:09.110 "dhgroup": "ffdhe8192" 00:20:09.110 } 00:20:09.110 } 00:20:09.110 ]' 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
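The checks at `target/auth.sh@75-77` pipe the `nvmf_subsystem_get_qpairs` JSON through `jq` and compare `digest`, `dhgroup`, and `state` against the expected values. The same verification can be sketched against a sample payload with plain `grep`, so the sketch runs without an SPDK target or a `jq` dependency:

```shell
# Sample qpair payload shaped like the log output above.
qpairs='[{"cntlid": 1, "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe8192"}}]'

for field in '"state": "completed"' '"digest": "sha512"' '"dhgroup": "ffdhe8192"'; do
    echo "$qpairs" | grep -q "$field" || { echo "mismatch: $field"; exit 1; }
done
echo "qpair auth verified"
```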
00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.110 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.368 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:20:09.368 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=: 00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
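The final sequence in this chunk re-adds the host with `key3`, restricts the host-side bdev layer to `sha256` DH-CHAP digests, and then expects `bdev_nvme_attach_controller` to fail with the `-5` Input/output error, presumably because authentication can no longer negotiate a digest both sides accept. A hedged sketch of those RPCs; the socket path and NQNs are the ones from the log:

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Restrict the host to sha256 DH-CHAP digests...
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

# ...then the attach with key3 is expected to fail authentication.
if ! "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3; then
    echo "attach rejected as expected"
fi
```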
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:20:10.301 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:10.558 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:10.816 request:
00:20:10.816 {
00:20:10.816 "name": "nvme0",
00:20:10.816 "trtype": "tcp",
00:20:10.816 "traddr": "10.0.0.2",
00:20:10.816 "adrfam": "ipv4",
00:20:10.816 "trsvcid": "4420",
00:20:10.816 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:10.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:10.816 "prchk_reftag": false,
00:20:10.816 "prchk_guard": false,
00:20:10.816 "hdgst": false,
00:20:10.816 "ddgst": false,
00:20:10.816 "dhchap_key": "key3",
00:20:10.816 "allow_unrecognized_csi": false,
00:20:10.816 "method": "bdev_nvme_attach_controller",
00:20:10.816 "req_id": 1
00:20:10.816 }
00:20:10.816 Got JSON-RPC error response
00:20:10.816 response:
00:20:10.816 {
00:20:10.816 "code": -5,
00:20:10.816 "message": "Input/output error"
00:20:10.816 }
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:10.816 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:11.076 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:11.334 request:
00:20:11.334 {
00:20:11.334 "name": "nvme0",
00:20:11.334 "trtype": "tcp",
00:20:11.334 "traddr": "10.0.0.2",
00:20:11.334 "adrfam": "ipv4",
00:20:11.334 "trsvcid": "4420",
00:20:11.334 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:11.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:11.334 "prchk_reftag": false,
00:20:11.334 "prchk_guard": false,
00:20:11.334 "hdgst": false,
00:20:11.334 "ddgst": false,
00:20:11.334 "dhchap_key": "key3",
00:20:11.334 "allow_unrecognized_csi": false,
00:20:11.334 "method": "bdev_nvme_attach_controller",
00:20:11.334 "req_id": 1
00:20:11.334 }
00:20:11.334 Got JSON-RPC error response
00:20:11.334 response:
00:20:11.334 {
00:20:11.334 "code": -5,
00:20:11.334 "message": "Input/output error"
00:20:11.334 }
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:11.594 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:11.595 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:11.854 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:12.420 request:
00:20:12.420 {
00:20:12.420 "name": "nvme0",
00:20:12.420 "trtype": "tcp",
00:20:12.420 "traddr": "10.0.0.2",
00:20:12.420 "adrfam": "ipv4",
00:20:12.420 "trsvcid": "4420",
00:20:12.420 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:12.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:12.420 "prchk_reftag": false,
00:20:12.420 "prchk_guard": false,
00:20:12.420 "hdgst": false,
00:20:12.420 "ddgst": false,
00:20:12.420 "dhchap_key": "key0",
00:20:12.420 "dhchap_ctrlr_key": "key1",
00:20:12.420 "allow_unrecognized_csi": false,
00:20:12.420 "method": "bdev_nvme_attach_controller",
00:20:12.420 "req_id": 1
00:20:12.420 }
00:20:12.420 Got JSON-RPC error response
00:20:12.420 response:
00:20:12.420 {
00:20:12.420 "code": -5,
00:20:12.420 "message": "Input/output error"
00:20:12.420 }
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:12.420 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:20:12.678 nvme0n1
00:20:12.678 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:20:12.678 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:20:12.678 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.936 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.936 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.936 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:13.218 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:14.616 nvme0n1
00:20:14.616 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:20:14.616 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:20:14.616 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:20:14.874 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.132 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.132 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=:
00:20:15.132 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: --dhchap-ctrl-secret DHHC-1:03:ZDYwNmQxZTU0ZTYxNDFkNmExNzVlZGM4NDYxM2YwZDM1YWEwYjkzYThhNDA3MmU1MGFlNGRjODQ4OGM1NjU2NYecyUY=:
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:16.066 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:16.323 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:20:17.263 request:
00:20:17.263 {
00:20:17.263 "name": "nvme0",
00:20:17.263 "trtype": "tcp",
00:20:17.263 "traddr": "10.0.0.2",
00:20:17.263 "adrfam": "ipv4",
00:20:17.263 "trsvcid": "4420",
00:20:17.263 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:17.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:17.263 "prchk_reftag": false,
00:20:17.263 "prchk_guard": false,
00:20:17.263 "hdgst": false,
00:20:17.263 "ddgst": false,
00:20:17.263 "dhchap_key": "key1",
00:20:17.263 "allow_unrecognized_csi": false,
00:20:17.263 "method": "bdev_nvme_attach_controller",
00:20:17.263 "req_id": 1
00:20:17.263 }
00:20:17.263 Got JSON-RPC error response
00:20:17.263 response:
00:20:17.263 {
00:20:17.263 "code": -5,
00:20:17.263 "message": "Input/output error"
00:20:17.263 }
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:17.263 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:18.638 nvme0n1
00:20:18.638 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:20:18.638 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:18.638 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:20:18.896 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:18.896 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:18.896 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:19.154 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:20:19.412 nvme0n1
00:20:19.412 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:20:19.412 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:20:19.412 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:19.670 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.670 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.670 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.927 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:19.927 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:19.927 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.927 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:19.927 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: '' 2s
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve:
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve: ]]
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NWYzYTRiODllYTA2ZjU2MzBmMDY2ZjAzOGE4ZDNiNzUNX/Ve:
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:19.928 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: 2s
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==:
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==: ]]
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTVkYTgyZjUwYzNmMzQ5MGZlYTY2MzJmMWEyZTUwZDAyMzA2YzFmMGI2MDhjZGI1aB1anQ==:
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:20:22.457 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0
00:20:24.355 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:24.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:24.356 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:20:25.334 nvme0n1
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:25.334 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:26.266 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:20:26.266 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:20:26.266 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:20:26.527 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:20:26.785 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:20:26.785 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:20:26.785 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:20:27.043 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:27.976 request: 00:20:27.976 { 00:20:27.976 "name": "nvme0", 00:20:27.976 "dhchap_key": "key1", 00:20:27.976 "dhchap_ctrlr_key": "key3", 00:20:27.976 "method": "bdev_nvme_set_keys", 00:20:27.976 "req_id": 1 00:20:27.976 } 00:20:27.976 Got JSON-RPC error response 00:20:27.976 response: 00:20:27.976 { 00:20:27.976 "code": -13, 00:20:27.976 "message": "Permission denied" 00:20:27.976 } 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:27.976 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.234 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:28.234 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:29.168 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:29.168 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:29.168 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:29.426 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:30.800 nvme0n1 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:30.800 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:31.735 request: 00:20:31.735 { 00:20:31.735 "name": "nvme0", 00:20:31.735 "dhchap_key": "key2", 
00:20:31.735 "dhchap_ctrlr_key": "key0", 00:20:31.735 "method": "bdev_nvme_set_keys", 00:20:31.735 "req_id": 1 00:20:31.735 } 00:20:31.735 Got JSON-RPC error response 00:20:31.735 response: 00:20:31.735 { 00:20:31.735 "code": -13, 00:20:31.735 "message": "Permission denied" 00:20:31.735 } 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:31.735 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.993 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:31.993 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:32.926 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:32.926 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:32.926 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:33.185 08:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 814673 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 814673 ']' 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 814673 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.185 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 814673 00:20:33.443 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.443 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.443 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 814673' 00:20:33.443 killing process with pid 814673 00:20:33.443 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 814673 00:20:33.443 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 814673 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.700 rmmod nvme_tcp 00:20:33.700 rmmod nvme_fabrics 00:20:33.700 rmmod nvme_keyring 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 837357 ']' 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 837357 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 837357 ']' 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 837357 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.700 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 837357 00:20:33.701 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.701 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.701 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 837357' 
00:20:33.701 killing process with pid 837357 00:20:33.701 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 837357 00:20:33.701 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 837357 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.959 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.493 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:36.493 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.v0j /tmp/spdk.key-sha256.lfE /tmp/spdk.key-sha384.LD9 /tmp/spdk.key-sha512.XCz 
/tmp/spdk.key-sha512.YGe /tmp/spdk.key-sha384.SxW /tmp/spdk.key-sha256.EsH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:36.493 00:20:36.493 real 3m31.804s 00:20:36.493 user 8m17.797s 00:20:36.493 sys 0m28.036s 00:20:36.493 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.493 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.493 ************************************ 00:20:36.493 END TEST nvmf_auth_target 00:20:36.494 ************************************ 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:36.494 ************************************ 00:20:36.494 START TEST nvmf_bdevio_no_huge 00:20:36.494 ************************************ 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:36.494 * Looking for test storage... 
00:20:36.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lcov --version 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:36.494 08:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.494 08:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.494 --rc genhtml_branch_coverage=1 00:20:36.494 --rc genhtml_function_coverage=1 00:20:36.494 --rc genhtml_legend=1 00:20:36.494 --rc geninfo_all_blocks=1 00:20:36.494 --rc geninfo_unexecuted_blocks=1 00:20:36.494 00:20:36.494 ' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.494 --rc genhtml_branch_coverage=1 00:20:36.494 --rc genhtml_function_coverage=1 00:20:36.494 --rc genhtml_legend=1 00:20:36.494 --rc geninfo_all_blocks=1 00:20:36.494 --rc geninfo_unexecuted_blocks=1 00:20:36.494 00:20:36.494 ' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.494 --rc genhtml_branch_coverage=1 00:20:36.494 --rc genhtml_function_coverage=1 00:20:36.494 --rc genhtml_legend=1 00:20:36.494 --rc geninfo_all_blocks=1 00:20:36.494 --rc geninfo_unexecuted_blocks=1 00:20:36.494 00:20:36.494 ' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:36.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.494 --rc genhtml_branch_coverage=1 00:20:36.494 --rc genhtml_function_coverage=1 00:20:36.494 --rc genhtml_legend=1 00:20:36.494 --rc geninfo_all_blocks=1 00:20:36.494 --rc geninfo_unexecuted_blocks=1 00:20:36.494 00:20:36.494 ' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:36.494 
08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:36.494 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:36.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:36.495 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.397 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:20:38.398 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:38.398 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:38.398 Found net devices under 0000:09:00.0: cvl_0_0 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.398 
08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:38.398 Found net devices under 0000:09:00.1: cvl_0_1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:38.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:38.398 00:20:38.398 --- 10.0.0.2 ping statistics --- 00:20:38.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.398 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:38.398 00:20:38.398 --- 10.0.0.1 ping statistics --- 00:20:38.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.398 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=842618 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 842618 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 842618 ']' 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.398 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.657 [2024-11-06 08:56:51.718297] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:20:38.657 [2024-11-06 08:56:51.718414] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:38.657 [2024-11-06 08:56:51.796155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.657 [2024-11-06 08:56:51.850556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.657 [2024-11-06 08:56:51.850616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.657 [2024-11-06 08:56:51.850640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.657 [2024-11-06 08:56:51.850650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.657 [2024-11-06 08:56:51.850660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.657 [2024-11-06 08:56:51.851706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:38.657 [2024-11-06 08:56:51.851776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:38.657 [2024-11-06 08:56:51.851802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:38.657 [2024-11-06 08:56:51.851805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.915 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.915 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:38.915 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:38.915 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.915 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 [2024-11-06 08:56:52.011822] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:38.915 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 Malloc0 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.915 [2024-11-06 08:56:52.049887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.915 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:38.915 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:20:38.916 { 00:20:38.916 "params": { 00:20:38.916 "name": "Nvme$subsystem", 00:20:38.916 "trtype": "$TEST_TRANSPORT", 00:20:38.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:38.916 "adrfam": "ipv4", 00:20:38.916 "trsvcid": "$NVMF_PORT", 00:20:38.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:38.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:38.916 "hdgst": ${hdgst:-false}, 00:20:38.916 "ddgst": ${ddgst:-false} 00:20:38.916 }, 00:20:38.916 "method": "bdev_nvme_attach_controller" 00:20:38.916 } 00:20:38.916 EOF 00:20:38.916 )") 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:20:38.916 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:20:38.916 "params": { 00:20:38.916 "name": "Nvme1", 00:20:38.916 "trtype": "tcp", 00:20:38.916 "traddr": "10.0.0.2", 00:20:38.916 "adrfam": "ipv4", 00:20:38.916 "trsvcid": "4420", 00:20:38.916 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.916 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.916 "hdgst": false, 00:20:38.916 "ddgst": false 00:20:38.916 }, 00:20:38.916 "method": "bdev_nvme_attach_controller" 00:20:38.916 }' 00:20:38.916 [2024-11-06 08:56:52.101209] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:20:38.916 [2024-11-06 08:56:52.101295] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid842643 ] 00:20:38.916 [2024-11-06 08:56:52.177889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:39.174 [2024-11-06 08:56:52.243794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.174 [2024-11-06 08:56:52.243852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.174 [2024-11-06 08:56:52.243857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.174 I/O targets: 00:20:39.174 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:39.174 00:20:39.174 00:20:39.174 CUnit - A unit testing framework for C - Version 2.1-3 00:20:39.174 http://cunit.sourceforge.net/ 00:20:39.174 00:20:39.174 00:20:39.174 Suite: bdevio tests on: Nvme1n1 00:20:39.432 Test: blockdev write read block ...passed 00:20:39.432 Test: blockdev write zeroes read block ...passed 00:20:39.432 Test: blockdev write zeroes read no split ...passed 00:20:39.432 Test: blockdev write zeroes 
read split ...passed 00:20:39.432 Test: blockdev write zeroes read split partial ...passed 00:20:39.432 Test: blockdev reset ...[2024-11-06 08:56:52.555001] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:39.432 [2024-11-06 08:56:52.555113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c46e0 (9): Bad file descriptor 00:20:39.432 [2024-11-06 08:56:52.584377] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:39.432 passed 00:20:39.432 Test: blockdev write read 8 blocks ...passed 00:20:39.432 Test: blockdev write read size > 128k ...passed 00:20:39.432 Test: blockdev write read invalid size ...passed 00:20:39.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:39.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:39.432 Test: blockdev write read max offset ...passed 00:20:39.689 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:39.689 Test: blockdev writev readv 8 blocks ...passed 00:20:39.689 Test: blockdev writev readv 30 x 1block ...passed 00:20:39.689 Test: blockdev writev readv block ...passed 00:20:39.689 Test: blockdev writev readv size > 128k ...passed 00:20:39.689 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:39.689 Test: blockdev comparev and writev ...[2024-11-06 08:56:52.917956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.917991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.918016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 
08:56:52.918032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.918414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.918440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.918462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.918477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.918856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.918883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.918905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.918921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.919272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:39.689 [2024-11-06 08:56:52.919294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:39.689 [2024-11-06 08:56:52.919309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:39.689 passed 00:20:39.947 Test: blockdev nvme passthru rw ...passed 00:20:39.947 Test: blockdev nvme passthru vendor specific ...[2024-11-06 08:56:53.001087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.947 [2024-11-06 08:56:53.001114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:39.947 [2024-11-06 08:56:53.001249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.947 [2024-11-06 08:56:53.001272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:39.947 [2024-11-06 08:56:53.001403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.947 [2024-11-06 08:56:53.001427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:39.947 [2024-11-06 08:56:53.001565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:39.947 [2024-11-06 08:56:53.001589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:39.947 passed 00:20:39.947 Test: blockdev nvme admin passthru ...passed 00:20:39.947 Test: blockdev copy ...passed 00:20:39.947 00:20:39.947 Run Summary: Type Total Ran Passed Failed Inactive 00:20:39.947 suites 1 1 n/a 0 0 00:20:39.947 tests 23 23 23 0 0 00:20:39.947 asserts 152 152 152 0 n/a 00:20:39.947 00:20:39.947 Elapsed time = 1.243 seconds 
00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.205 rmmod nvme_tcp 00:20:40.205 rmmod nvme_fabrics 00:20:40.205 rmmod nvme_keyring 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:40.205 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 842618 ']' 00:20:40.205 08:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 842618 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 842618 ']' 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 842618 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842618 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842618' 00:20:40.206 killing process with pid 842618 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 842618 00:20:40.206 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 842618 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:20:40.773 08:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.773 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.678 00:20:42.678 real 0m6.612s 00:20:42.678 user 0m10.734s 00:20:42.678 sys 0m2.577s 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.678 ************************************ 00:20:42.678 END TEST nvmf_bdevio_no_huge 00:20:42.678 ************************************ 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.678 
************************************ 00:20:42.678 START TEST nvmf_tls 00:20:42.678 ************************************ 00:20:42.678 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:42.938 * Looking for test storage... 00:20:42.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lcov --version 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:20:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.938 --rc genhtml_branch_coverage=1 00:20:42.938 --rc genhtml_function_coverage=1 00:20:42.938 --rc genhtml_legend=1 00:20:42.938 --rc geninfo_all_blocks=1 00:20:42.938 --rc geninfo_unexecuted_blocks=1 00:20:42.938 00:20:42.938 ' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:20:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.938 --rc genhtml_branch_coverage=1 00:20:42.938 --rc genhtml_function_coverage=1 00:20:42.938 --rc genhtml_legend=1 00:20:42.938 --rc geninfo_all_blocks=1 00:20:42.938 --rc geninfo_unexecuted_blocks=1 00:20:42.938 00:20:42.938 ' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:20:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.938 --rc genhtml_branch_coverage=1 00:20:42.938 --rc genhtml_function_coverage=1 00:20:42.938 --rc genhtml_legend=1 00:20:42.938 --rc geninfo_all_blocks=1 00:20:42.938 --rc geninfo_unexecuted_blocks=1 00:20:42.938 00:20:42.938 ' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:20:42.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.938 --rc genhtml_branch_coverage=1 00:20:42.938 --rc genhtml_function_coverage=1 00:20:42.938 --rc genhtml_legend=1 00:20:42.938 --rc geninfo_all_blocks=1 00:20:42.938 --rc geninfo_unexecuted_blocks=1 00:20:42.938 00:20:42.938 ' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.938 
08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:42.938 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:42.939 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.471 08:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:45.471 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:45.471 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.471 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:45.472 08:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:45.472 Found net devices under 0000:09:00.0: cvl_0_0 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:45.472 Found net devices under 0000:09:00.1: cvl_0_1 00:20:45.472 08:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.472 
08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:20:45.472 00:20:45.472 --- 10.0.0.2 ping statistics --- 00:20:45.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.472 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:20:45.472 00:20:45.472 --- 10.0.0.1 ping statistics --- 00:20:45.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.472 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=844840 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 844840 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 844840 ']' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.472 [2024-11-06 08:56:58.390902] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:20:45.472 [2024-11-06 08:56:58.390990] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.472 [2024-11-06 08:56:58.464529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.472 [2024-11-06 08:56:58.520813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.472 [2024-11-06 08:56:58.520890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:45.472 [2024-11-06 08:56:58.520918] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.472 [2024-11-06 08:56:58.520929] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.472 [2024-11-06 08:56:58.520939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.472 [2024-11-06 08:56:58.521533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:45.472 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:45.730 true 00:20:45.730 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:45.730 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:45.988 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:45.988 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:45.988 
08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:46.246 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:46.246 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:46.833 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:46.833 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:46.833 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:47.112 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.112 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:47.370 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:47.370 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:47.370 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.370 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:47.628 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:47.628 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:47.628 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:47.887 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.887 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:48.144 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:48.145 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:48.145 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:48.402 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:48.402 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:48.660 08:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.dj4drna4Dn 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.KCayXrdUBQ 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.dj4drna4Dn 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.KCayXrdUBQ 00:20:48.660 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:48.919 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:49.485 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.dj4drna4Dn 00:20:49.485 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.dj4drna4Dn 00:20:49.485 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.743 [2024-11-06 08:57:02.795990] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.743 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:50.002 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:50.260 [2024-11-06 08:57:03.385602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:50.260 [2024-11-06 08:57:03.385884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.260 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:50.518 malloc0 00:20:50.518 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:50.776 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.dj4drna4Dn 00:20:51.033 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.599 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.dj4drna4Dn 00:21:01.567 Initializing NVMe Controllers 00:21:01.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:01.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:01.567 Initialization complete. Launching workers. 
00:21:01.567 ======================================================== 00:21:01.567 Latency(us) 00:21:01.567 Device Information : IOPS MiB/s Average min max 00:21:01.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8668.62 33.86 7385.04 1071.54 8867.65 00:21:01.567 ======================================================== 00:21:01.567 Total : 8668.62 33.86 7385.04 1071.54 8867.65 00:21:01.567 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.dj4drna4Dn 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dj4drna4Dn 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=847371 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 847371 /var/tmp/bdevperf.sock 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 847371 ']' 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.568 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.568 [2024-11-06 08:57:14.741243] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:01.568 [2024-11-06 08:57:14.741313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847371 ] 00:21:01.568 [2024-11-06 08:57:14.805695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.826 [2024-11-06 08:57:14.863629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.826 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.826 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:01.826 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dj4drna4Dn 00:21:02.083 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:02.342 [2024-11-06 08:57:15.606510] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.600 TLSTESTn1 00:21:02.600 08:57:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.600 Running I/O for 10 seconds... 00:21:04.904 3523.00 IOPS, 13.76 MiB/s [2024-11-06T07:57:19.127Z] 3547.00 IOPS, 13.86 MiB/s [2024-11-06T07:57:20.058Z] 3560.00 IOPS, 13.91 MiB/s [2024-11-06T07:57:20.990Z] 3574.75 IOPS, 13.96 MiB/s [2024-11-06T07:57:21.927Z] 3548.40 IOPS, 13.86 MiB/s [2024-11-06T07:57:22.860Z] 3547.17 IOPS, 13.86 MiB/s [2024-11-06T07:57:24.232Z] 3536.29 IOPS, 13.81 MiB/s [2024-11-06T07:57:25.164Z] 3534.00 IOPS, 13.80 MiB/s [2024-11-06T07:57:26.098Z] 3537.11 IOPS, 13.82 MiB/s [2024-11-06T07:57:26.098Z] 3533.90 IOPS, 13.80 MiB/s 00:21:12.809 Latency(us) 00:21:12.809 [2024-11-06T07:57:26.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.809 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:12.809 Verification LBA range: start 0x0 length 0x2000 00:21:12.809 TLSTESTn1 : 10.02 3539.12 13.82 0.00 0.00 36102.26 7912.87 36505.98 00:21:12.809 [2024-11-06T07:57:26.098Z] =================================================================================================================== 00:21:12.809 [2024-11-06T07:57:26.098Z] Total : 3539.12 13.82 0.00 0.00 36102.26 7912.87 36505.98 00:21:12.809 { 00:21:12.809 "results": [ 00:21:12.809 { 00:21:12.809 "job": "TLSTESTn1", 00:21:12.809 "core_mask": "0x4", 00:21:12.809 "workload": "verify", 00:21:12.809 "status": "finished", 00:21:12.809 "verify_range": { 00:21:12.809 "start": 0, 00:21:12.809 "length": 8192 00:21:12.809 }, 00:21:12.809 "queue_depth": 128, 00:21:12.809 "io_size": 4096, 00:21:12.809 "runtime": 10.020862, 00:21:12.809 "iops": 
3539.11669475141, 00:21:12.809 "mibps": 13.824674588872695, 00:21:12.809 "io_failed": 0, 00:21:12.809 "io_timeout": 0, 00:21:12.809 "avg_latency_us": 36102.26372893463, 00:21:12.809 "min_latency_us": 7912.8651851851855, 00:21:12.809 "max_latency_us": 36505.97925925926 00:21:12.809 } 00:21:12.809 ], 00:21:12.809 "core_count": 1 00:21:12.809 } 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 847371 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 847371 ']' 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 847371 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 847371 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 847371' 00:21:12.809 killing process with pid 847371 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 847371 00:21:12.809 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.809 00:21:12.809 Latency(us) 00:21:12.809 [2024-11-06T07:57:26.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.809 [2024-11-06T07:57:26.098Z] 
=================================================================================================================== 00:21:12.809 [2024-11-06T07:57:26.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:12.809 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 847371 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KCayXrdUBQ 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KCayXrdUBQ 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KCayXrdUBQ 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KCayXrdUBQ 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=848695 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 848695 /var/tmp/bdevperf.sock 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 848695 ']' 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.067 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.067 [2024-11-06 08:57:26.165145] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:13.068 [2024-11-06 08:57:26.165242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848695 ] 00:21:13.068 [2024-11-06 08:57:26.230711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.068 [2024-11-06 08:57:26.286290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.326 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.326 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:13.326 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KCayXrdUBQ 00:21:13.583 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:13.840 [2024-11-06 08:57:26.934626] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.840 [2024-11-06 08:57:26.940050] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:13.840 [2024-11-06 08:57:26.940587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16582c0 (107): Transport endpoint is not connected 00:21:13.840 [2024-11-06 08:57:26.941578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16582c0 (9): Bad file descriptor 00:21:13.840 
[2024-11-06 08:57:26.942576] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:13.840 [2024-11-06 08:57:26.942597] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:13.840 [2024-11-06 08:57:26.942620] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:13.840 [2024-11-06 08:57:26.942638] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:13.840 request: 00:21:13.840 { 00:21:13.840 "name": "TLSTEST", 00:21:13.840 "trtype": "tcp", 00:21:13.840 "traddr": "10.0.0.2", 00:21:13.840 "adrfam": "ipv4", 00:21:13.840 "trsvcid": "4420", 00:21:13.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.840 "prchk_reftag": false, 00:21:13.840 "prchk_guard": false, 00:21:13.840 "hdgst": false, 00:21:13.840 "ddgst": false, 00:21:13.840 "psk": "key0", 00:21:13.840 "allow_unrecognized_csi": false, 00:21:13.840 "method": "bdev_nvme_attach_controller", 00:21:13.840 "req_id": 1 00:21:13.840 } 00:21:13.840 Got JSON-RPC error response 00:21:13.840 response: 00:21:13.840 { 00:21:13.840 "code": -5, 00:21:13.840 "message": "Input/output error" 00:21:13.840 } 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 848695 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 848695 ']' 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 848695 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848695 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848695' 00:21:13.840 killing process with pid 848695 00:21:13.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 848695 00:21:13.840 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.840 00:21:13.840 Latency(us) 00:21:13.840 [2024-11-06T07:57:27.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.840 [2024-11-06T07:57:27.129Z] =================================================================================================================== 00:21:13.841 [2024-11-06T07:57:27.130Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.841 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 848695 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dj4drna4Dn 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dj4drna4Dn 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.dj4drna4Dn 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dj4drna4Dn 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=848835 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 848835 
/var/tmp/bdevperf.sock 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 848835 ']' 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:14.099 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.099 [2024-11-06 08:57:27.235009] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:14.099 [2024-11-06 08:57:27.235104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848835 ] 00:21:14.099 [2024-11-06 08:57:27.301355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.099 [2024-11-06 08:57:27.359125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.357 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:14.357 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:14.357 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dj4drna4Dn 00:21:14.615 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:14.873 [2024-11-06 08:57:27.982966] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.873 [2024-11-06 08:57:27.989935] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:14.873 [2024-11-06 08:57:27.989967] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:14.873 [2024-11-06 08:57:27.990018] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:14.873 [2024-11-06 08:57:27.990395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa2c0 (107): Transport endpoint is not connected 00:21:14.873 [2024-11-06 08:57:27.991385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fa2c0 (9): Bad file descriptor 00:21:14.873 [2024-11-06 08:57:27.992384] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:14.873 [2024-11-06 08:57:27.992405] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:14.873 [2024-11-06 08:57:27.992418] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:14.873 [2024-11-06 08:57:27.992436] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:14.873 request: 00:21:14.873 { 00:21:14.873 "name": "TLSTEST", 00:21:14.873 "trtype": "tcp", 00:21:14.873 "traddr": "10.0.0.2", 00:21:14.873 "adrfam": "ipv4", 00:21:14.873 "trsvcid": "4420", 00:21:14.873 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.873 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:14.873 "prchk_reftag": false, 00:21:14.873 "prchk_guard": false, 00:21:14.873 "hdgst": false, 00:21:14.873 "ddgst": false, 00:21:14.873 "psk": "key0", 00:21:14.873 "allow_unrecognized_csi": false, 00:21:14.873 "method": "bdev_nvme_attach_controller", 00:21:14.873 "req_id": 1 00:21:14.873 } 00:21:14.873 Got JSON-RPC error response 00:21:14.873 response: 00:21:14.873 { 00:21:14.873 "code": -5, 00:21:14.873 "message": "Input/output error" 00:21:14.873 } 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 848835 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 848835 ']' 00:21:14.873 08:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 848835 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848835 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848835' 00:21:14.873 killing process with pid 848835 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 848835 00:21:14.873 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.873 00:21:14.873 Latency(us) 00:21:14.873 [2024-11-06T07:57:28.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.873 [2024-11-06T07:57:28.162Z] =================================================================================================================== 00:21:14.873 [2024-11-06T07:57:28.162Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.873 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 848835 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:15.131 08:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dj4drna4Dn 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dj4drna4Dn 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.dj4drna4Dn 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.dj4drna4Dn 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=848973 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 848973 /var/tmp/bdevperf.sock 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 848973 ']' 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.131 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.131 [2024-11-06 08:57:28.321607] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:15.132 [2024-11-06 08:57:28.321699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid848973 ] 00:21:15.132 [2024-11-06 08:57:28.390536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.390 [2024-11-06 08:57:28.449524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.390 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.390 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:15.390 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.dj4drna4Dn 00:21:15.648 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.906 [2024-11-06 08:57:29.073014] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.906 [2024-11-06 08:57:29.082650] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:15.906 [2024-11-06 08:57:29.082679] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:15.906 [2024-11-06 08:57:29.082729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:15.906 [2024-11-06 08:57:29.083121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25462c0 (107): Transport endpoint is not connected 00:21:15.906 [2024-11-06 08:57:29.084112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25462c0 (9): Bad file descriptor 00:21:15.906 [2024-11-06 08:57:29.085111] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:15.906 [2024-11-06 08:57:29.085161] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:15.906 [2024-11-06 08:57:29.085175] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:15.906 [2024-11-06 08:57:29.085193] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:15.906 request: 00:21:15.906 { 00:21:15.906 "name": "TLSTEST", 00:21:15.906 "trtype": "tcp", 00:21:15.906 "traddr": "10.0.0.2", 00:21:15.906 "adrfam": "ipv4", 00:21:15.906 "trsvcid": "4420", 00:21:15.906 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.906 "prchk_reftag": false, 00:21:15.906 "prchk_guard": false, 00:21:15.906 "hdgst": false, 00:21:15.906 "ddgst": false, 00:21:15.906 "psk": "key0", 00:21:15.906 "allow_unrecognized_csi": false, 00:21:15.906 "method": "bdev_nvme_attach_controller", 00:21:15.906 "req_id": 1 00:21:15.906 } 00:21:15.906 Got JSON-RPC error response 00:21:15.906 response: 00:21:15.906 { 00:21:15.906 "code": -5, 00:21:15.906 "message": "Input/output error" 00:21:15.906 } 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 848973 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 848973 ']' 00:21:15.906 08:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 848973 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 848973 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 848973' 00:21:15.906 killing process with pid 848973 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 848973 00:21:15.906 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.906 00:21:15.906 Latency(us) 00:21:15.906 [2024-11-06T07:57:29.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.906 [2024-11-06T07:57:29.195Z] =================================================================================================================== 00:21:15.906 [2024-11-06T07:57:29.195Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:15.906 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 848973 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.164 08:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=849114 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 849114 /var/tmp/bdevperf.sock 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 849114 ']' 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.164 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.164 [2024-11-06 08:57:29.416269] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:16.164 [2024-11-06 08:57:29.416363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849114 ] 00:21:16.422 [2024-11-06 08:57:29.484736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.422 [2024-11-06 08:57:29.545121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.422 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.422 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:16.422 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:16.986 [2024-11-06 08:57:29.969150] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:16.986 [2024-11-06 08:57:29.969191] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:16.986 request: 00:21:16.986 { 00:21:16.986 "name": "key0", 00:21:16.986 "path": "", 00:21:16.986 "method": "keyring_file_add_key", 00:21:16.986 "req_id": 1 00:21:16.986 } 00:21:16.986 Got JSON-RPC error response 00:21:16.986 response: 00:21:16.986 { 00:21:16.986 "code": -1, 00:21:16.986 "message": "Operation not permitted" 00:21:16.986 } 00:21:16.986 08:57:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.986 [2024-11-06 08:57:30.254057] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:16.986 [2024-11-06 08:57:30.254140] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:16.986 request: 00:21:16.986 { 00:21:16.986 "name": "TLSTEST", 00:21:16.986 "trtype": "tcp", 00:21:16.986 "traddr": "10.0.0.2", 00:21:16.986 "adrfam": "ipv4", 00:21:16.986 "trsvcid": "4420", 00:21:16.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.986 "prchk_reftag": false, 00:21:16.986 "prchk_guard": false, 00:21:16.986 "hdgst": false, 00:21:16.986 "ddgst": false, 00:21:16.986 "psk": "key0", 00:21:16.986 "allow_unrecognized_csi": false, 00:21:16.986 "method": "bdev_nvme_attach_controller", 00:21:16.986 "req_id": 1 00:21:16.986 } 00:21:16.986 Got JSON-RPC error response 00:21:16.986 response: 00:21:16.986 { 00:21:16.986 "code": -126, 00:21:16.986 "message": "Required key not available" 00:21:16.986 } 00:21:16.986 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 849114 00:21:16.986 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 849114 ']' 00:21:16.986 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 849114 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 849114 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 849114' 00:21:17.245 killing process with pid 849114 00:21:17.245 
08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 849114 00:21:17.245 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.245 00:21:17.245 Latency(us) 00:21:17.245 [2024-11-06T07:57:30.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.245 [2024-11-06T07:57:30.534Z] =================================================================================================================== 00:21:17.245 [2024-11-06T07:57:30.534Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 849114 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 844840 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 844840 ']' 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 844840 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.245 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 844840 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 844840' 00:21:17.505 killing process with pid 844840 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 844840 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 844840 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:21:17.505 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.4JdmXq92Yv 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:17.764 08:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.4JdmXq92Yv 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=849330 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 849330 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 849330 ']' 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.764 08:57:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.764 [2024-11-06 08:57:30.884947] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:17.764 [2024-11-06 08:57:30.885048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.764 [2024-11-06 08:57:30.955293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.764 [2024-11-06 08:57:31.007019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.764 [2024-11-06 08:57:31.007073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.764 [2024-11-06 08:57:31.007087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.764 [2024-11-06 08:57:31.007099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.764 [2024-11-06 08:57:31.007122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.764 [2024-11-06 08:57:31.007654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4JdmXq92Yv 00:21:18.022 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.280 [2024-11-06 08:57:31.384948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.280 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.538 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.796 [2024-11-06 08:57:31.926458] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.796 [2024-11-06 08:57:31.926754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:18.796 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:19.054 malloc0 00:21:19.054 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.312 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:19.570 08:57:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4JdmXq92Yv 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4JdmXq92Yv 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=849559 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.828 08:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 849559 /var/tmp/bdevperf.sock 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 849559 ']' 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.828 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.828 [2024-11-06 08:57:33.062178] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:19.828 [2024-11-06 08:57:33.062279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid849559 ] 00:21:20.086 [2024-11-06 08:57:33.131218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.086 [2024-11-06 08:57:33.189090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.086 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.086 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:20.086 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:20.344 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:20.602 [2024-11-06 08:57:33.804287] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.602 TLSTESTn1 00:21:20.860 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:20.860 Running I/O for 10 seconds... 
00:21:23.166 3469.00 IOPS, 13.55 MiB/s [2024-11-06T07:57:37.090Z] 3491.00 IOPS, 13.64 MiB/s [2024-11-06T07:57:38.041Z] 3496.67 IOPS, 13.66 MiB/s [2024-11-06T07:57:39.414Z] 3510.50 IOPS, 13.71 MiB/s [2024-11-06T07:57:40.346Z] 3528.60 IOPS, 13.78 MiB/s [2024-11-06T07:57:41.278Z] 3525.83 IOPS, 13.77 MiB/s [2024-11-06T07:57:42.212Z] 3534.86 IOPS, 13.81 MiB/s [2024-11-06T07:57:43.144Z] 3518.50 IOPS, 13.74 MiB/s [2024-11-06T07:57:44.077Z] 3504.00 IOPS, 13.69 MiB/s [2024-11-06T07:57:44.077Z] 3512.40 IOPS, 13.72 MiB/s 00:21:30.788 Latency(us) 00:21:30.788 [2024-11-06T07:57:44.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.788 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:30.788 Verification LBA range: start 0x0 length 0x2000 00:21:30.788 TLSTESTn1 : 10.03 3515.26 13.73 0.00 0.00 36348.41 7233.23 30486.38 00:21:30.788 [2024-11-06T07:57:44.077Z] =================================================================================================================== 00:21:30.788 [2024-11-06T07:57:44.077Z] Total : 3515.26 13.73 0.00 0.00 36348.41 7233.23 30486.38 00:21:30.788 { 00:21:30.788 "results": [ 00:21:30.788 { 00:21:30.788 "job": "TLSTESTn1", 00:21:30.788 "core_mask": "0x4", 00:21:30.788 "workload": "verify", 00:21:30.788 "status": "finished", 00:21:30.788 "verify_range": { 00:21:30.788 "start": 0, 00:21:30.788 "length": 8192 00:21:30.788 }, 00:21:30.788 "queue_depth": 128, 00:21:30.788 "io_size": 4096, 00:21:30.788 "runtime": 10.027695, 00:21:30.788 "iops": 3515.2644750363866, 00:21:30.788 "mibps": 13.731501855610885, 00:21:30.788 "io_failed": 0, 00:21:30.788 "io_timeout": 0, 00:21:30.788 "avg_latency_us": 36348.40709682164, 00:21:30.788 "min_latency_us": 7233.2325925925925, 00:21:30.788 "max_latency_us": 30486.376296296297 00:21:30.788 } 00:21:30.788 ], 00:21:30.788 "core_count": 1 00:21:30.788 } 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 849559 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 849559 ']' 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 849559 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 849559 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 849559' 00:21:31.046 killing process with pid 849559 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 849559 00:21:31.046 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.046 00:21:31.046 Latency(us) 00:21:31.046 [2024-11-06T07:57:44.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.046 [2024-11-06T07:57:44.335Z] =================================================================================================================== 00:21:31.046 [2024-11-06T07:57:44.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.046 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 849559 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.4JdmXq92Yv 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4JdmXq92Yv 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4JdmXq92Yv 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4JdmXq92Yv 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4JdmXq92Yv 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=850881 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.305 08:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 850881 /var/tmp/bdevperf.sock 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 850881 ']' 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.305 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.305 [2024-11-06 08:57:44.403681] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:31.305 [2024-11-06 08:57:44.403774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850881 ] 00:21:31.305 [2024-11-06 08:57:44.476290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.305 [2024-11-06 08:57:44.535569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.563 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.563 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.563 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:31.821 [2024-11-06 08:57:44.900664] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4JdmXq92Yv': 0100666 00:21:31.821 [2024-11-06 08:57:44.900706] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:31.821 request: 00:21:31.821 { 00:21:31.821 "name": "key0", 00:21:31.821 "path": "/tmp/tmp.4JdmXq92Yv", 00:21:31.821 "method": "keyring_file_add_key", 00:21:31.821 "req_id": 1 00:21:31.821 } 00:21:31.821 Got JSON-RPC error response 00:21:31.821 response: 00:21:31.821 { 00:21:31.821 "code": -1, 00:21:31.821 "message": "Operation not permitted" 00:21:31.821 } 00:21:31.821 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:32.080 [2024-11-06 08:57:45.165486] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.080 [2024-11-06 08:57:45.165546] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:32.080 request: 00:21:32.080 { 00:21:32.080 "name": "TLSTEST", 00:21:32.080 "trtype": "tcp", 00:21:32.080 "traddr": "10.0.0.2", 00:21:32.080 "adrfam": "ipv4", 00:21:32.080 "trsvcid": "4420", 00:21:32.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.080 "prchk_reftag": false, 00:21:32.080 "prchk_guard": false, 00:21:32.080 "hdgst": false, 00:21:32.080 "ddgst": false, 00:21:32.080 "psk": "key0", 00:21:32.080 "allow_unrecognized_csi": false, 00:21:32.080 "method": "bdev_nvme_attach_controller", 00:21:32.080 "req_id": 1 00:21:32.080 } 00:21:32.080 Got JSON-RPC error response 00:21:32.080 response: 00:21:32.080 { 00:21:32.080 "code": -126, 00:21:32.080 "message": "Required key not available" 00:21:32.080 } 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 850881 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 850881 ']' 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 850881 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 850881 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 850881' 00:21:32.080 killing process with pid 850881 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 850881 00:21:32.080 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.080 00:21:32.080 Latency(us) 00:21:32.080 [2024-11-06T07:57:45.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.080 [2024-11-06T07:57:45.369Z] =================================================================================================================== 00:21:32.080 [2024-11-06T07:57:45.369Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.080 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 850881 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 849330 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 849330 ']' 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 849330 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 849330 00:21:32.338 08:57:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 849330' 00:21:32.338 killing process with pid 849330 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 849330 00:21:32.338 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 849330 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=851146 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 851146 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 851146 ']' 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:32.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.597 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.597 [2024-11-06 08:57:45.759771] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:32.597 [2024-11-06 08:57:45.759893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.597 [2024-11-06 08:57:45.829588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.597 [2024-11-06 08:57:45.876817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.597 [2024-11-06 08:57:45.876884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.597 [2024-11-06 08:57:45.876907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.597 [2024-11-06 08:57:45.876919] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.597 [2024-11-06 08:57:45.876929] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:32.597 [2024-11-06 08:57:45.877463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.855 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.855 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:32.855 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:32.855 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:32.855 08:57:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4JdmXq92Yv 00:21:32.855 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:33.113 [2024-11-06 08:57:46.265899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.113 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:33.372 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.630 [2024-11-06 08:57:46.811369] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.630 [2024-11-06 08:57:46.811618] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.630 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.887 malloc0 00:21:33.887 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:34.145 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:34.403 [2024-11-06 08:57:47.599917] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.4JdmXq92Yv': 0100666 00:21:34.403 [2024-11-06 08:57:47.599960] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:34.403 request: 00:21:34.403 { 00:21:34.403 "name": "key0", 00:21:34.403 "path": "/tmp/tmp.4JdmXq92Yv", 00:21:34.403 "method": "keyring_file_add_key", 00:21:34.403 "req_id": 1 
00:21:34.403 } 00:21:34.403 Got JSON-RPC error response 00:21:34.403 response: 00:21:34.403 { 00:21:34.403 "code": -1, 00:21:34.403 "message": "Operation not permitted" 00:21:34.403 } 00:21:34.403 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.661 [2024-11-06 08:57:47.864682] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:34.661 [2024-11-06 08:57:47.864745] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:34.661 request: 00:21:34.661 { 00:21:34.661 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.661 "host": "nqn.2016-06.io.spdk:host1", 00:21:34.661 "psk": "key0", 00:21:34.661 "method": "nvmf_subsystem_add_host", 00:21:34.661 "req_id": 1 00:21:34.661 } 00:21:34.661 Got JSON-RPC error response 00:21:34.661 response: 00:21:34.661 { 00:21:34.661 "code": -32603, 00:21:34.661 "message": "Internal error" 00:21:34.661 } 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 851146 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 851146 ']' 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 851146 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:34.661 08:57:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851146 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851146' 00:21:34.661 killing process with pid 851146 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 851146 00:21:34.661 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 851146 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.4JdmXq92Yv 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=851449 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 851449 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 851449 ']' 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.919 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.919 [2024-11-06 08:57:48.192050] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:34.919 [2024-11-06 08:57:48.192130] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.177 [2024-11-06 08:57:48.263002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.177 [2024-11-06 08:57:48.316868] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.177 [2024-11-06 08:57:48.316920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.177 [2024-11-06 08:57:48.316944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.177 [2024-11-06 08:57:48.316955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.177 [2024-11-06 08:57:48.316964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:35.177 [2024-11-06 08:57:48.317493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4JdmXq92Yv 00:21:35.177 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.439 [2024-11-06 08:57:48.694861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.439 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.699 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.956 [2024-11-06 08:57:49.236325] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.956 [2024-11-06 08:57:49.236577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:36.214 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.474 malloc0 00:21:36.474 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.734 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:36.992 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=851735 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 851735 /var/tmp/bdevperf.sock 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 851735 ']' 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:21:37.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.250 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.250 [2024-11-06 08:57:50.529640] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:37.250 [2024-11-06 08:57:50.529747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851735 ] 00:21:37.508 [2024-11-06 08:57:50.599376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.508 [2024-11-06 08:57:50.657012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.765 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:37.765 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:37.765 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:38.023 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:38.281 [2024-11-06 08:57:51.406884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.281 TLSTESTn1 00:21:38.281 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:38.846 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:38.846 "subsystems": [ 00:21:38.846 { 00:21:38.846 "subsystem": "keyring", 00:21:38.846 "config": [ 00:21:38.846 { 00:21:38.846 "method": "keyring_file_add_key", 00:21:38.846 "params": { 00:21:38.846 "name": "key0", 00:21:38.846 "path": "/tmp/tmp.4JdmXq92Yv" 00:21:38.846 } 00:21:38.846 } 00:21:38.846 ] 00:21:38.846 }, 00:21:38.846 { 00:21:38.846 "subsystem": "iobuf", 00:21:38.846 "config": [ 00:21:38.846 { 00:21:38.846 "method": "iobuf_set_options", 00:21:38.846 "params": { 00:21:38.846 "small_pool_count": 8192, 00:21:38.846 "large_pool_count": 1024, 00:21:38.846 "small_bufsize": 8192, 00:21:38.846 "large_bufsize": 135168, 00:21:38.846 "enable_numa": false 00:21:38.846 } 00:21:38.846 } 00:21:38.846 ] 00:21:38.846 }, 00:21:38.846 { 00:21:38.846 "subsystem": "sock", 00:21:38.846 "config": [ 00:21:38.846 { 00:21:38.846 "method": "sock_set_default_impl", 00:21:38.846 "params": { 00:21:38.846 "impl_name": "posix" 00:21:38.846 } 00:21:38.846 }, 00:21:38.846 { 00:21:38.846 "method": "sock_impl_set_options", 00:21:38.846 "params": { 00:21:38.846 "impl_name": "ssl", 00:21:38.846 "recv_buf_size": 4096, 00:21:38.846 "send_buf_size": 4096, 00:21:38.846 "enable_recv_pipe": true, 00:21:38.846 "enable_quickack": false, 00:21:38.846 "enable_placement_id": 0, 00:21:38.846 "enable_zerocopy_send_server": true, 00:21:38.846 "enable_zerocopy_send_client": false, 00:21:38.846 "zerocopy_threshold": 0, 00:21:38.846 "tls_version": 0, 00:21:38.846 "enable_ktls": false 00:21:38.846 } 00:21:38.846 }, 00:21:38.846 { 00:21:38.847 "method": "sock_impl_set_options", 00:21:38.847 "params": { 00:21:38.847 "impl_name": "posix", 00:21:38.847 "recv_buf_size": 2097152, 00:21:38.847 "send_buf_size": 2097152, 00:21:38.847 "enable_recv_pipe": true, 00:21:38.847 "enable_quickack": false, 00:21:38.847 "enable_placement_id": 0, 
00:21:38.847 "enable_zerocopy_send_server": true, 00:21:38.847 "enable_zerocopy_send_client": false, 00:21:38.847 "zerocopy_threshold": 0, 00:21:38.847 "tls_version": 0, 00:21:38.847 "enable_ktls": false 00:21:38.847 } 00:21:38.847 } 00:21:38.847 ] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "vmd", 00:21:38.847 "config": [] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "accel", 00:21:38.847 "config": [ 00:21:38.847 { 00:21:38.847 "method": "accel_set_options", 00:21:38.847 "params": { 00:21:38.847 "small_cache_size": 128, 00:21:38.847 "large_cache_size": 16, 00:21:38.847 "task_count": 2048, 00:21:38.847 "sequence_count": 2048, 00:21:38.847 "buf_count": 2048 00:21:38.847 } 00:21:38.847 } 00:21:38.847 ] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "bdev", 00:21:38.847 "config": [ 00:21:38.847 { 00:21:38.847 "method": "bdev_set_options", 00:21:38.847 "params": { 00:21:38.847 "bdev_io_pool_size": 65535, 00:21:38.847 "bdev_io_cache_size": 256, 00:21:38.847 "bdev_auto_examine": true, 00:21:38.847 "iobuf_small_cache_size": 128, 00:21:38.847 "iobuf_large_cache_size": 16 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_raid_set_options", 00:21:38.847 "params": { 00:21:38.847 "process_window_size_kb": 1024, 00:21:38.847 "process_max_bandwidth_mb_sec": 0 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_iscsi_set_options", 00:21:38.847 "params": { 00:21:38.847 "timeout_sec": 30 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_nvme_set_options", 00:21:38.847 "params": { 00:21:38.847 "action_on_timeout": "none", 00:21:38.847 "timeout_us": 0, 00:21:38.847 "timeout_admin_us": 0, 00:21:38.847 "keep_alive_timeout_ms": 10000, 00:21:38.847 "arbitration_burst": 0, 00:21:38.847 "low_priority_weight": 0, 00:21:38.847 "medium_priority_weight": 0, 00:21:38.847 "high_priority_weight": 0, 00:21:38.847 "nvme_adminq_poll_period_us": 10000, 00:21:38.847 "nvme_ioq_poll_period_us": 0, 
00:21:38.847 "io_queue_requests": 0, 00:21:38.847 "delay_cmd_submit": true, 00:21:38.847 "transport_retry_count": 4, 00:21:38.847 "bdev_retry_count": 3, 00:21:38.847 "transport_ack_timeout": 0, 00:21:38.847 "ctrlr_loss_timeout_sec": 0, 00:21:38.847 "reconnect_delay_sec": 0, 00:21:38.847 "fast_io_fail_timeout_sec": 0, 00:21:38.847 "disable_auto_failback": false, 00:21:38.847 "generate_uuids": false, 00:21:38.847 "transport_tos": 0, 00:21:38.847 "nvme_error_stat": false, 00:21:38.847 "rdma_srq_size": 0, 00:21:38.847 "io_path_stat": false, 00:21:38.847 "allow_accel_sequence": false, 00:21:38.847 "rdma_max_cq_size": 0, 00:21:38.847 "rdma_cm_event_timeout_ms": 0, 00:21:38.847 "dhchap_digests": [ 00:21:38.847 "sha256", 00:21:38.847 "sha384", 00:21:38.847 "sha512" 00:21:38.847 ], 00:21:38.847 "dhchap_dhgroups": [ 00:21:38.847 "null", 00:21:38.847 "ffdhe2048", 00:21:38.847 "ffdhe3072", 00:21:38.847 "ffdhe4096", 00:21:38.847 "ffdhe6144", 00:21:38.847 "ffdhe8192" 00:21:38.847 ] 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_nvme_set_hotplug", 00:21:38.847 "params": { 00:21:38.847 "period_us": 100000, 00:21:38.847 "enable": false 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_malloc_create", 00:21:38.847 "params": { 00:21:38.847 "name": "malloc0", 00:21:38.847 "num_blocks": 8192, 00:21:38.847 "block_size": 4096, 00:21:38.847 "physical_block_size": 4096, 00:21:38.847 "uuid": "16f6fa59-122c-494e-b4d7-9005ba623744", 00:21:38.847 "optimal_io_boundary": 0, 00:21:38.847 "md_size": 0, 00:21:38.847 "dif_type": 0, 00:21:38.847 "dif_is_head_of_md": false, 00:21:38.847 "dif_pi_format": 0 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "bdev_wait_for_examine" 00:21:38.847 } 00:21:38.847 ] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "nbd", 00:21:38.847 "config": [] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "scheduler", 00:21:38.847 "config": [ 00:21:38.847 { 00:21:38.847 "method": 
"framework_set_scheduler", 00:21:38.847 "params": { 00:21:38.847 "name": "static" 00:21:38.847 } 00:21:38.847 } 00:21:38.847 ] 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "subsystem": "nvmf", 00:21:38.847 "config": [ 00:21:38.847 { 00:21:38.847 "method": "nvmf_set_config", 00:21:38.847 "params": { 00:21:38.847 "discovery_filter": "match_any", 00:21:38.847 "admin_cmd_passthru": { 00:21:38.847 "identify_ctrlr": false 00:21:38.847 }, 00:21:38.847 "dhchap_digests": [ 00:21:38.847 "sha256", 00:21:38.847 "sha384", 00:21:38.847 "sha512" 00:21:38.847 ], 00:21:38.847 "dhchap_dhgroups": [ 00:21:38.847 "null", 00:21:38.847 "ffdhe2048", 00:21:38.847 "ffdhe3072", 00:21:38.847 "ffdhe4096", 00:21:38.847 "ffdhe6144", 00:21:38.847 "ffdhe8192" 00:21:38.847 ] 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_set_max_subsystems", 00:21:38.847 "params": { 00:21:38.847 "max_subsystems": 1024 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_set_crdt", 00:21:38.847 "params": { 00:21:38.847 "crdt1": 0, 00:21:38.847 "crdt2": 0, 00:21:38.847 "crdt3": 0 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_create_transport", 00:21:38.847 "params": { 00:21:38.847 "trtype": "TCP", 00:21:38.847 "max_queue_depth": 128, 00:21:38.847 "max_io_qpairs_per_ctrlr": 127, 00:21:38.847 "in_capsule_data_size": 4096, 00:21:38.847 "max_io_size": 131072, 00:21:38.847 "io_unit_size": 131072, 00:21:38.847 "max_aq_depth": 128, 00:21:38.847 "num_shared_buffers": 511, 00:21:38.847 "buf_cache_size": 4294967295, 00:21:38.847 "dif_insert_or_strip": false, 00:21:38.847 "zcopy": false, 00:21:38.847 "c2h_success": false, 00:21:38.847 "sock_priority": 0, 00:21:38.847 "abort_timeout_sec": 1, 00:21:38.847 "ack_timeout": 0, 00:21:38.847 "data_wr_pool_size": 0 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_create_subsystem", 00:21:38.847 "params": { 00:21:38.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.847 
"allow_any_host": false, 00:21:38.847 "serial_number": "SPDK00000000000001", 00:21:38.847 "model_number": "SPDK bdev Controller", 00:21:38.847 "max_namespaces": 10, 00:21:38.847 "min_cntlid": 1, 00:21:38.847 "max_cntlid": 65519, 00:21:38.847 "ana_reporting": false 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_subsystem_add_host", 00:21:38.847 "params": { 00:21:38.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.847 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.847 "psk": "key0" 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_subsystem_add_ns", 00:21:38.847 "params": { 00:21:38.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.847 "namespace": { 00:21:38.847 "nsid": 1, 00:21:38.847 "bdev_name": "malloc0", 00:21:38.847 "nguid": "16F6FA59122C494EB4D79005BA623744", 00:21:38.847 "uuid": "16f6fa59-122c-494e-b4d7-9005ba623744", 00:21:38.847 "no_auto_visible": false 00:21:38.847 } 00:21:38.847 } 00:21:38.847 }, 00:21:38.847 { 00:21:38.847 "method": "nvmf_subsystem_add_listener", 00:21:38.848 "params": { 00:21:38.848 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.848 "listen_address": { 00:21:38.848 "trtype": "TCP", 00:21:38.848 "adrfam": "IPv4", 00:21:38.848 "traddr": "10.0.0.2", 00:21:38.848 "trsvcid": "4420" 00:21:38.848 }, 00:21:38.848 "secure_channel": true 00:21:38.848 } 00:21:38.848 } 00:21:38.848 ] 00:21:38.848 } 00:21:38.848 ] 00:21:38.848 }' 00:21:38.848 08:57:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:39.106 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:39.106 "subsystems": [ 00:21:39.106 { 00:21:39.106 "subsystem": "keyring", 00:21:39.106 "config": [ 00:21:39.106 { 00:21:39.106 "method": "keyring_file_add_key", 00:21:39.106 "params": { 00:21:39.106 "name": "key0", 00:21:39.106 "path": "/tmp/tmp.4JdmXq92Yv" 00:21:39.106 } 
00:21:39.106 } 00:21:39.106 ] 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "subsystem": "iobuf", 00:21:39.106 "config": [ 00:21:39.106 { 00:21:39.106 "method": "iobuf_set_options", 00:21:39.106 "params": { 00:21:39.106 "small_pool_count": 8192, 00:21:39.106 "large_pool_count": 1024, 00:21:39.106 "small_bufsize": 8192, 00:21:39.106 "large_bufsize": 135168, 00:21:39.106 "enable_numa": false 00:21:39.106 } 00:21:39.106 } 00:21:39.106 ] 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "subsystem": "sock", 00:21:39.106 "config": [ 00:21:39.106 { 00:21:39.106 "method": "sock_set_default_impl", 00:21:39.106 "params": { 00:21:39.106 "impl_name": "posix" 00:21:39.106 } 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "method": "sock_impl_set_options", 00:21:39.106 "params": { 00:21:39.106 "impl_name": "ssl", 00:21:39.106 "recv_buf_size": 4096, 00:21:39.106 "send_buf_size": 4096, 00:21:39.106 "enable_recv_pipe": true, 00:21:39.106 "enable_quickack": false, 00:21:39.106 "enable_placement_id": 0, 00:21:39.106 "enable_zerocopy_send_server": true, 00:21:39.106 "enable_zerocopy_send_client": false, 00:21:39.106 "zerocopy_threshold": 0, 00:21:39.106 "tls_version": 0, 00:21:39.106 "enable_ktls": false 00:21:39.106 } 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "method": "sock_impl_set_options", 00:21:39.106 "params": { 00:21:39.106 "impl_name": "posix", 00:21:39.106 "recv_buf_size": 2097152, 00:21:39.106 "send_buf_size": 2097152, 00:21:39.106 "enable_recv_pipe": true, 00:21:39.106 "enable_quickack": false, 00:21:39.106 "enable_placement_id": 0, 00:21:39.106 "enable_zerocopy_send_server": true, 00:21:39.106 "enable_zerocopy_send_client": false, 00:21:39.106 "zerocopy_threshold": 0, 00:21:39.106 "tls_version": 0, 00:21:39.106 "enable_ktls": false 00:21:39.106 } 00:21:39.106 } 00:21:39.106 ] 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "subsystem": "vmd", 00:21:39.106 "config": [] 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "subsystem": "accel", 00:21:39.106 "config": [ 00:21:39.106 { 00:21:39.106 
"method": "accel_set_options", 00:21:39.106 "params": { 00:21:39.106 "small_cache_size": 128, 00:21:39.106 "large_cache_size": 16, 00:21:39.106 "task_count": 2048, 00:21:39.106 "sequence_count": 2048, 00:21:39.106 "buf_count": 2048 00:21:39.106 } 00:21:39.106 } 00:21:39.106 ] 00:21:39.106 }, 00:21:39.106 { 00:21:39.106 "subsystem": "bdev", 00:21:39.106 "config": [ 00:21:39.106 { 00:21:39.106 "method": "bdev_set_options", 00:21:39.106 "params": { 00:21:39.106 "bdev_io_pool_size": 65535, 00:21:39.107 "bdev_io_cache_size": 256, 00:21:39.107 "bdev_auto_examine": true, 00:21:39.107 "iobuf_small_cache_size": 128, 00:21:39.107 "iobuf_large_cache_size": 16 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_raid_set_options", 00:21:39.107 "params": { 00:21:39.107 "process_window_size_kb": 1024, 00:21:39.107 "process_max_bandwidth_mb_sec": 0 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_iscsi_set_options", 00:21:39.107 "params": { 00:21:39.107 "timeout_sec": 30 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_nvme_set_options", 00:21:39.107 "params": { 00:21:39.107 "action_on_timeout": "none", 00:21:39.107 "timeout_us": 0, 00:21:39.107 "timeout_admin_us": 0, 00:21:39.107 "keep_alive_timeout_ms": 10000, 00:21:39.107 "arbitration_burst": 0, 00:21:39.107 "low_priority_weight": 0, 00:21:39.107 "medium_priority_weight": 0, 00:21:39.107 "high_priority_weight": 0, 00:21:39.107 "nvme_adminq_poll_period_us": 10000, 00:21:39.107 "nvme_ioq_poll_period_us": 0, 00:21:39.107 "io_queue_requests": 512, 00:21:39.107 "delay_cmd_submit": true, 00:21:39.107 "transport_retry_count": 4, 00:21:39.107 "bdev_retry_count": 3, 00:21:39.107 "transport_ack_timeout": 0, 00:21:39.107 "ctrlr_loss_timeout_sec": 0, 00:21:39.107 "reconnect_delay_sec": 0, 00:21:39.107 "fast_io_fail_timeout_sec": 0, 00:21:39.107 "disable_auto_failback": false, 00:21:39.107 "generate_uuids": false, 00:21:39.107 "transport_tos": 0, 00:21:39.107 
"nvme_error_stat": false, 00:21:39.107 "rdma_srq_size": 0, 00:21:39.107 "io_path_stat": false, 00:21:39.107 "allow_accel_sequence": false, 00:21:39.107 "rdma_max_cq_size": 0, 00:21:39.107 "rdma_cm_event_timeout_ms": 0, 00:21:39.107 "dhchap_digests": [ 00:21:39.107 "sha256", 00:21:39.107 "sha384", 00:21:39.107 "sha512" 00:21:39.107 ], 00:21:39.107 "dhchap_dhgroups": [ 00:21:39.107 "null", 00:21:39.107 "ffdhe2048", 00:21:39.107 "ffdhe3072", 00:21:39.107 "ffdhe4096", 00:21:39.107 "ffdhe6144", 00:21:39.107 "ffdhe8192" 00:21:39.107 ] 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_nvme_attach_controller", 00:21:39.107 "params": { 00:21:39.107 "name": "TLSTEST", 00:21:39.107 "trtype": "TCP", 00:21:39.107 "adrfam": "IPv4", 00:21:39.107 "traddr": "10.0.0.2", 00:21:39.107 "trsvcid": "4420", 00:21:39.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.107 "prchk_reftag": false, 00:21:39.107 "prchk_guard": false, 00:21:39.107 "ctrlr_loss_timeout_sec": 0, 00:21:39.107 "reconnect_delay_sec": 0, 00:21:39.107 "fast_io_fail_timeout_sec": 0, 00:21:39.107 "psk": "key0", 00:21:39.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.107 "hdgst": false, 00:21:39.107 "ddgst": false, 00:21:39.107 "multipath": "multipath" 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_nvme_set_hotplug", 00:21:39.107 "params": { 00:21:39.107 "period_us": 100000, 00:21:39.107 "enable": false 00:21:39.107 } 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "method": "bdev_wait_for_examine" 00:21:39.107 } 00:21:39.107 ] 00:21:39.107 }, 00:21:39.107 { 00:21:39.107 "subsystem": "nbd", 00:21:39.107 "config": [] 00:21:39.107 } 00:21:39.107 ] 00:21:39.107 }' 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 851735 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 851735 ']' 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
kill -0 851735 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851735 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851735' 00:21:39.107 killing process with pid 851735 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 851735 00:21:39.107 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.107 00:21:39.107 Latency(us) 00:21:39.107 [2024-11-06T07:57:52.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.107 [2024-11-06T07:57:52.396Z] =================================================================================================================== 00:21:39.107 [2024-11-06T07:57:52.396Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.107 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 851735 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 851449 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 851449 ']' 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 851449 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 851449 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 851449' 00:21:39.365 killing process with pid 851449 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 851449 00:21:39.365 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 851449 00:21:39.624 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:39.624 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:39.624 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.624 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:39.624 "subsystems": [ 00:21:39.624 { 00:21:39.624 "subsystem": "keyring", 00:21:39.624 "config": [ 00:21:39.624 { 00:21:39.624 "method": "keyring_file_add_key", 00:21:39.624 "params": { 00:21:39.624 "name": "key0", 00:21:39.624 "path": "/tmp/tmp.4JdmXq92Yv" 00:21:39.624 } 00:21:39.624 } 00:21:39.624 ] 00:21:39.624 }, 00:21:39.624 { 00:21:39.624 "subsystem": "iobuf", 00:21:39.624 "config": [ 00:21:39.624 { 00:21:39.624 "method": "iobuf_set_options", 00:21:39.624 "params": { 00:21:39.625 "small_pool_count": 8192, 00:21:39.625 "large_pool_count": 1024, 00:21:39.625 "small_bufsize": 8192, 00:21:39.625 "large_bufsize": 135168, 00:21:39.625 "enable_numa": false 00:21:39.625 } 00:21:39.625 } 00:21:39.625 ] 00:21:39.625 }, 00:21:39.625 
{ 00:21:39.625 "subsystem": "sock", 00:21:39.625 "config": [ 00:21:39.625 { 00:21:39.625 "method": "sock_set_default_impl", 00:21:39.625 "params": { 00:21:39.625 "impl_name": "posix" 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "sock_impl_set_options", 00:21:39.625 "params": { 00:21:39.625 "impl_name": "ssl", 00:21:39.625 "recv_buf_size": 4096, 00:21:39.625 "send_buf_size": 4096, 00:21:39.625 "enable_recv_pipe": true, 00:21:39.625 "enable_quickack": false, 00:21:39.625 "enable_placement_id": 0, 00:21:39.625 "enable_zerocopy_send_server": true, 00:21:39.625 "enable_zerocopy_send_client": false, 00:21:39.625 "zerocopy_threshold": 0, 00:21:39.625 "tls_version": 0, 00:21:39.625 "enable_ktls": false 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "sock_impl_set_options", 00:21:39.625 "params": { 00:21:39.625 "impl_name": "posix", 00:21:39.625 "recv_buf_size": 2097152, 00:21:39.625 "send_buf_size": 2097152, 00:21:39.625 "enable_recv_pipe": true, 00:21:39.625 "enable_quickack": false, 00:21:39.625 "enable_placement_id": 0, 00:21:39.625 "enable_zerocopy_send_server": true, 00:21:39.625 "enable_zerocopy_send_client": false, 00:21:39.625 "zerocopy_threshold": 0, 00:21:39.625 "tls_version": 0, 00:21:39.625 "enable_ktls": false 00:21:39.625 } 00:21:39.625 } 00:21:39.625 ] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "vmd", 00:21:39.625 "config": [] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "accel", 00:21:39.625 "config": [ 00:21:39.625 { 00:21:39.625 "method": "accel_set_options", 00:21:39.625 "params": { 00:21:39.625 "small_cache_size": 128, 00:21:39.625 "large_cache_size": 16, 00:21:39.625 "task_count": 2048, 00:21:39.625 "sequence_count": 2048, 00:21:39.625 "buf_count": 2048 00:21:39.625 } 00:21:39.625 } 00:21:39.625 ] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "bdev", 00:21:39.625 "config": [ 00:21:39.625 { 00:21:39.625 "method": "bdev_set_options", 00:21:39.625 "params": { 00:21:39.625 
"bdev_io_pool_size": 65535, 00:21:39.625 "bdev_io_cache_size": 256, 00:21:39.625 "bdev_auto_examine": true, 00:21:39.625 "iobuf_small_cache_size": 128, 00:21:39.625 "iobuf_large_cache_size": 16 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_raid_set_options", 00:21:39.625 "params": { 00:21:39.625 "process_window_size_kb": 1024, 00:21:39.625 "process_max_bandwidth_mb_sec": 0 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_iscsi_set_options", 00:21:39.625 "params": { 00:21:39.625 "timeout_sec": 30 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_nvme_set_options", 00:21:39.625 "params": { 00:21:39.625 "action_on_timeout": "none", 00:21:39.625 "timeout_us": 0, 00:21:39.625 "timeout_admin_us": 0, 00:21:39.625 "keep_alive_timeout_ms": 10000, 00:21:39.625 "arbitration_burst": 0, 00:21:39.625 "low_priority_weight": 0, 00:21:39.625 "medium_priority_weight": 0, 00:21:39.625 "high_priority_weight": 0, 00:21:39.625 "nvme_adminq_poll_period_us": 10000, 00:21:39.625 "nvme_ioq_poll_period_us": 0, 00:21:39.625 "io_queue_requests": 0, 00:21:39.625 "delay_cmd_submit": true, 00:21:39.625 "transport_retry_count": 4, 00:21:39.625 "bdev_retry_count": 3, 00:21:39.625 "transport_ack_timeout": 0, 00:21:39.625 "ctrlr_loss_timeout_sec": 0, 00:21:39.625 "reconnect_delay_sec": 0, 00:21:39.625 "fast_io_fail_timeout_sec": 0, 00:21:39.625 "disable_auto_failback": false, 00:21:39.625 "generate_uuids": false, 00:21:39.625 "transport_tos": 0, 00:21:39.625 "nvme_error_stat": false, 00:21:39.625 "rdma_srq_size": 0, 00:21:39.625 "io_path_stat": false, 00:21:39.625 "allow_accel_sequence": false, 00:21:39.625 "rdma_max_cq_size": 0, 00:21:39.625 "rdma_cm_event_timeout_ms": 0, 00:21:39.625 "dhchap_digests": [ 00:21:39.625 "sha256", 00:21:39.625 "sha384", 00:21:39.625 "sha512" 00:21:39.625 ], 00:21:39.625 "dhchap_dhgroups": [ 00:21:39.625 "null", 00:21:39.625 "ffdhe2048", 00:21:39.625 "ffdhe3072", 00:21:39.625 "ffdhe4096", 
00:21:39.625 "ffdhe6144", 00:21:39.625 "ffdhe8192" 00:21:39.625 ] 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_nvme_set_hotplug", 00:21:39.625 "params": { 00:21:39.625 "period_us": 100000, 00:21:39.625 "enable": false 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_malloc_create", 00:21:39.625 "params": { 00:21:39.625 "name": "malloc0", 00:21:39.625 "num_blocks": 8192, 00:21:39.625 "block_size": 4096, 00:21:39.625 "physical_block_size": 4096, 00:21:39.625 "uuid": "16f6fa59-122c-494e-b4d7-9005ba623744", 00:21:39.625 "optimal_io_boundary": 0, 00:21:39.625 "md_size": 0, 00:21:39.625 "dif_type": 0, 00:21:39.625 "dif_is_head_of_md": false, 00:21:39.625 "dif_pi_format": 0 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "bdev_wait_for_examine" 00:21:39.625 } 00:21:39.625 ] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "nbd", 00:21:39.625 "config": [] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "scheduler", 00:21:39.625 "config": [ 00:21:39.625 { 00:21:39.625 "method": "framework_set_scheduler", 00:21:39.625 "params": { 00:21:39.625 "name": "static" 00:21:39.625 } 00:21:39.625 } 00:21:39.625 ] 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "subsystem": "nvmf", 00:21:39.625 "config": [ 00:21:39.625 { 00:21:39.625 "method": "nvmf_set_config", 00:21:39.625 "params": { 00:21:39.625 "discovery_filter": "match_any", 00:21:39.625 "admin_cmd_passthru": { 00:21:39.625 "identify_ctrlr": false 00:21:39.625 }, 00:21:39.625 "dhchap_digests": [ 00:21:39.625 "sha256", 00:21:39.625 "sha384", 00:21:39.625 "sha512" 00:21:39.625 ], 00:21:39.625 "dhchap_dhgroups": [ 00:21:39.625 "null", 00:21:39.625 "ffdhe2048", 00:21:39.625 "ffdhe3072", 00:21:39.625 "ffdhe4096", 00:21:39.625 "ffdhe6144", 00:21:39.625 "ffdhe8192" 00:21:39.625 ] 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "nvmf_set_max_subsystems", 00:21:39.625 "params": { 00:21:39.625 "max_subsystems": 1024 00:21:39.625 
} 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "nvmf_set_crdt", 00:21:39.625 "params": { 00:21:39.625 "crdt1": 0, 00:21:39.625 "crdt2": 0, 00:21:39.625 "crdt3": 0 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.625 "method": "nvmf_create_transport", 00:21:39.625 "params": { 00:21:39.625 "trtype": "TCP", 00:21:39.625 "max_queue_depth": 128, 00:21:39.625 "max_io_qpairs_per_ctrlr": 127, 00:21:39.625 "in_capsule_data_size": 4096, 00:21:39.625 "max_io_size": 131072, 00:21:39.625 "io_unit_size": 131072, 00:21:39.625 "max_aq_depth": 128, 00:21:39.625 "num_shared_buffers": 511, 00:21:39.625 "buf_cache_size": 4294967295, 00:21:39.625 "dif_insert_or_strip": false, 00:21:39.625 "zcopy": false, 00:21:39.625 "c2h_success": false, 00:21:39.625 "sock_priority": 0, 00:21:39.625 "abort_timeout_sec": 1, 00:21:39.625 "ack_timeout": 0, 00:21:39.625 "data_wr_pool_size": 0 00:21:39.625 } 00:21:39.625 }, 00:21:39.625 { 00:21:39.626 "method": "nvmf_create_subsystem", 00:21:39.626 "params": { 00:21:39.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.626 "allow_any_host": false, 00:21:39.626 "serial_number": "SPDK00000000000001", 00:21:39.626 "model_number": "SPDK bdev Controller", 00:21:39.626 "max_namespaces": 10, 00:21:39.626 "min_cntlid": 1, 00:21:39.626 "max_cntlid": 65519, 00:21:39.626 "ana_reporting": false 00:21:39.626 } 00:21:39.626 }, 00:21:39.626 { 00:21:39.626 "method": "nvmf_subsystem_add_host", 00:21:39.626 "params": { 00:21:39.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.626 "host": "nqn.2016-06.io.spdk:host1", 00:21:39.626 "psk": "key0" 00:21:39.626 } 00:21:39.626 }, 00:21:39.626 { 00:21:39.626 "method": "nvmf_subsystem_add_ns", 00:21:39.626 "params": { 00:21:39.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.626 "namespace": { 00:21:39.626 "nsid": 1, 00:21:39.626 "bdev_name": "malloc0", 00:21:39.626 "nguid": "16F6FA59122C494EB4D79005BA623744", 00:21:39.626 "uuid": "16f6fa59-122c-494e-b4d7-9005ba623744", 00:21:39.626 "no_auto_visible": false 
00:21:39.626 } 00:21:39.626 } 00:21:39.626 }, 00:21:39.626 { 00:21:39.626 "method": "nvmf_subsystem_add_listener", 00:21:39.626 "params": { 00:21:39.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.626 "listen_address": { 00:21:39.626 "trtype": "TCP", 00:21:39.626 "adrfam": "IPv4", 00:21:39.626 "traddr": "10.0.0.2", 00:21:39.626 "trsvcid": "4420" 00:21:39.626 }, 00:21:39.626 "secure_channel": true 00:21:39.626 } 00:21:39.626 } 00:21:39.626 ] 00:21:39.626 } 00:21:39.626 ] 00:21:39.626 }' 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=852014 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 852014 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 852014 ']' 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.626 08:57:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.626 [2024-11-06 08:57:52.730721] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:39.626 [2024-11-06 08:57:52.730818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.626 [2024-11-06 08:57:52.802551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.626 [2024-11-06 08:57:52.853810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.626 [2024-11-06 08:57:52.853875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.626 [2024-11-06 08:57:52.853898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.626 [2024-11-06 08:57:52.853909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.626 [2024-11-06 08:57:52.853918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:39.626 [2024-11-06 08:57:52.854497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.884 [2024-11-06 08:57:53.083518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.884 [2024-11-06 08:57:53.115538] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.884 [2024-11-06 08:57:53.115768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.449 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.449 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.449 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:40.449 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.449 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=852166 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 852166 /var/tmp/bdevperf.sock 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 852166 ']' 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:21:40.707 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:40.707 "subsystems": [ 00:21:40.707 { 00:21:40.707 "subsystem": "keyring", 00:21:40.707 "config": [ 00:21:40.707 { 00:21:40.707 "method": "keyring_file_add_key", 00:21:40.707 "params": { 00:21:40.707 "name": "key0", 00:21:40.707 "path": "/tmp/tmp.4JdmXq92Yv" 00:21:40.707 } 00:21:40.707 } 00:21:40.707 ] 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "subsystem": "iobuf", 00:21:40.707 "config": [ 00:21:40.707 { 00:21:40.707 "method": "iobuf_set_options", 00:21:40.707 "params": { 00:21:40.707 "small_pool_count": 8192, 00:21:40.707 "large_pool_count": 1024, 00:21:40.707 "small_bufsize": 8192, 00:21:40.707 "large_bufsize": 135168, 00:21:40.707 "enable_numa": false 00:21:40.707 } 00:21:40.707 } 00:21:40.707 ] 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "subsystem": "sock", 00:21:40.707 "config": [ 00:21:40.707 { 00:21:40.707 "method": "sock_set_default_impl", 00:21:40.707 "params": { 00:21:40.707 "impl_name": "posix" 00:21:40.707 } 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "method": "sock_impl_set_options", 00:21:40.707 "params": { 00:21:40.707 "impl_name": "ssl", 00:21:40.707 "recv_buf_size": 4096, 00:21:40.707 "send_buf_size": 4096, 00:21:40.707 "enable_recv_pipe": true, 00:21:40.707 "enable_quickack": false, 00:21:40.707 "enable_placement_id": 0, 00:21:40.707 "enable_zerocopy_send_server": true, 00:21:40.707 "enable_zerocopy_send_client": false, 00:21:40.707 "zerocopy_threshold": 0, 00:21:40.707 "tls_version": 0, 00:21:40.707 "enable_ktls": false 00:21:40.707 } 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "method": "sock_impl_set_options", 00:21:40.707 "params": { 00:21:40.707 "impl_name": "posix", 00:21:40.707 "recv_buf_size": 2097152, 00:21:40.707 "send_buf_size": 2097152, 00:21:40.707 "enable_recv_pipe": true, 00:21:40.707 "enable_quickack": false, 00:21:40.707 "enable_placement_id": 0, 00:21:40.707 "enable_zerocopy_send_server": true, 00:21:40.707 
"enable_zerocopy_send_client": false, 00:21:40.707 "zerocopy_threshold": 0, 00:21:40.707 "tls_version": 0, 00:21:40.707 "enable_ktls": false 00:21:40.707 } 00:21:40.707 } 00:21:40.707 ] 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "subsystem": "vmd", 00:21:40.707 "config": [] 00:21:40.707 }, 00:21:40.707 { 00:21:40.707 "subsystem": "accel", 00:21:40.707 "config": [ 00:21:40.707 { 00:21:40.707 "method": "accel_set_options", 00:21:40.707 "params": { 00:21:40.707 "small_cache_size": 128, 00:21:40.707 "large_cache_size": 16, 00:21:40.707 "task_count": 2048, 00:21:40.707 "sequence_count": 2048, 00:21:40.708 "buf_count": 2048 00:21:40.708 } 00:21:40.708 } 00:21:40.708 ] 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "subsystem": "bdev", 00:21:40.708 "config": [ 00:21:40.708 { 00:21:40.708 "method": "bdev_set_options", 00:21:40.708 "params": { 00:21:40.708 "bdev_io_pool_size": 65535, 00:21:40.708 "bdev_io_cache_size": 256, 00:21:40.708 "bdev_auto_examine": true, 00:21:40.708 "iobuf_small_cache_size": 128, 00:21:40.708 "iobuf_large_cache_size": 16 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_raid_set_options", 00:21:40.708 "params": { 00:21:40.708 "process_window_size_kb": 1024, 00:21:40.708 "process_max_bandwidth_mb_sec": 0 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_iscsi_set_options", 00:21:40.708 "params": { 00:21:40.708 "timeout_sec": 30 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_nvme_set_options", 00:21:40.708 "params": { 00:21:40.708 "action_on_timeout": "none", 00:21:40.708 "timeout_us": 0, 00:21:40.708 "timeout_admin_us": 0, 00:21:40.708 "keep_alive_timeout_ms": 10000, 00:21:40.708 "arbitration_burst": 0, 00:21:40.708 "low_priority_weight": 0, 00:21:40.708 "medium_priority_weight": 0, 00:21:40.708 "high_priority_weight": 0, 00:21:40.708 "nvme_adminq_poll_period_us": 10000, 00:21:40.708 "nvme_ioq_poll_period_us": 0, 00:21:40.708 "io_queue_requests": 512, 00:21:40.708 
"delay_cmd_submit": true, 00:21:40.708 "transport_retry_count": 4, 00:21:40.708 "bdev_retry_count": 3, 00:21:40.708 "transport_ack_timeout": 0, 00:21:40.708 "ctrlr_loss_timeout_sec": 0, 00:21:40.708 "reconnect_delay_sec": 0, 00:21:40.708 "fast_io_fail_timeout_sec": 0, 00:21:40.708 "disable_auto_failback": false, 00:21:40.708 "generate_uuids": false, 00:21:40.708 "transport_tos": 0, 00:21:40.708 "nvme_error_stat": false, 00:21:40.708 "rdma_srq_size": 0, 00:21:40.708 "io_path_stat": false, 00:21:40.708 "allow_accel_sequence": false, 00:21:40.708 "rdma_max_cq_size": 0, 00:21:40.708 "rdma_cm_event_timeout_ms": 0, 00:21:40.708 "dhchap_digests": [ 00:21:40.708 "sha256", 00:21:40.708 "sha384", 00:21:40.708 "sha512" 00:21:40.708 ], 00:21:40.708 "dhchap_dhgroups": [ 00:21:40.708 "null", 00:21:40.708 "ffdhe2048", 00:21:40.708 "ffdhe3072", 00:21:40.708 "ffdhe4096", 00:21:40.708 "ffdhe6144", 00:21:40.708 "ffdhe8192" 00:21:40.708 ] 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_nvme_attach_controller", 00:21:40.708 "params": { 00:21:40.708 "name": "TLSTEST", 00:21:40.708 "trtype": "TCP", 00:21:40.708 "adrfam": "IPv4", 00:21:40.708 "traddr": "10.0.0.2", 00:21:40.708 "trsvcid": "4420", 00:21:40.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.708 "prchk_reftag": false, 00:21:40.708 "prchk_guard": false, 00:21:40.708 "ctrlr_loss_timeout_sec": 0, 00:21:40.708 "reconnect_delay_sec": 0, 00:21:40.708 "fast_io_fail_timeout_sec": 0, 00:21:40.708 "psk": "key0", 00:21:40.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.708 "hdgst": false, 00:21:40.708 "ddgst": false, 00:21:40.708 "multipath": "multipath" 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_nvme_set_hotplug", 00:21:40.708 "params": { 00:21:40.708 "period_us": 100000, 00:21:40.708 "enable": false 00:21:40.708 } 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 "method": "bdev_wait_for_examine" 00:21:40.708 } 00:21:40.708 ] 00:21:40.708 }, 00:21:40.708 { 00:21:40.708 
"subsystem": "nbd", 00:21:40.708 "config": [] 00:21:40.708 } 00:21:40.708 ] 00:21:40.708 }' 00:21:40.708 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.708 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.708 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.708 [2024-11-06 08:57:53.800442] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:40.708 [2024-11-06 08:57:53.800534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852166 ] 00:21:40.708 [2024-11-06 08:57:53.865312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.708 [2024-11-06 08:57:53.922203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.966 [2024-11-06 08:57:54.104709] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.531 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.532 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.532 08:57:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:41.789 Running I/O for 10 seconds... 
00:21:43.652 3168.00 IOPS, 12.38 MiB/s [2024-11-06T07:57:58.313Z] 3235.50 IOPS, 12.64 MiB/s [2024-11-06T07:57:59.245Z] 3230.67 IOPS, 12.62 MiB/s [2024-11-06T07:58:00.179Z] 3269.50 IOPS, 12.77 MiB/s [2024-11-06T07:58:01.110Z] 3279.00 IOPS, 12.81 MiB/s [2024-11-06T07:58:02.043Z] 3278.33 IOPS, 12.81 MiB/s [2024-11-06T07:58:02.975Z] 3285.00 IOPS, 12.83 MiB/s [2024-11-06T07:58:04.347Z] 3282.62 IOPS, 12.82 MiB/s [2024-11-06T07:58:05.282Z] 3279.89 IOPS, 12.81 MiB/s [2024-11-06T07:58:05.282Z] 3280.40 IOPS, 12.81 MiB/s
00:21:51.993 Latency(us)
00:21:51.993 [2024-11-06T07:58:05.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.993 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:51.993 Verification LBA range: start 0x0 length 0x2000
00:21:51.993 TLSTESTn1 : 10.02 3285.11 12.83 0.00 0.00 38889.66 10631.40 36117.62
00:21:51.993 [2024-11-06T07:58:05.282Z] ===================================================================================================================
00:21:51.993 [2024-11-06T07:58:05.282Z] Total : 3285.11 12.83 0.00 0.00 38889.66 10631.40 36117.62
00:21:51.993 {
00:21:51.993 "results": [
00:21:51.993 {
00:21:51.993 "job": "TLSTESTn1",
00:21:51.993 "core_mask": "0x4",
00:21:51.993 "workload": "verify",
00:21:51.993 "status": "finished",
00:21:51.993 "verify_range": {
00:21:51.993 "start": 0,
00:21:51.993 "length": 8192
00:21:51.993 },
00:21:51.993 "queue_depth": 128,
00:21:51.993 "io_size": 4096,
00:21:51.993 "runtime": 10.024323,
00:21:51.993 "iops": 3285.109627852175,
00:21:51.993 "mibps": 12.832459483797559,
00:21:51.993 "io_failed": 0,
00:21:51.993 "io_timeout": 0,
00:21:51.993 "avg_latency_us": 38889.65752850236,
00:21:51.993 "min_latency_us": 10631.395555555555,
00:21:51.993 "max_latency_us": 36117.61777777778
00:21:51.993 }
00:21:51.993 ],
00:21:51.993 "core_count": 1
00:21:51.993 }
00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini;
exit 1' SIGINT SIGTERM EXIT 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 852166 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 852166 ']' 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 852166 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.993 08:58:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 852166 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 852166' 00:21:51.993 killing process with pid 852166 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 852166 00:21:51.993 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.993 00:21:51.993 Latency(us) 00:21:51.993 [2024-11-06T07:58:05.282Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.993 [2024-11-06T07:58:05.282Z] =================================================================================================================== 00:21:51.993 [2024-11-06T07:58:05.282Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 852166 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 852014 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 
-- # '[' -z 852014 ']' 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 852014 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.993 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 852014 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 852014' 00:21:52.253 killing process with pid 852014 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 852014 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 852014 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=853513 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 853513 00:21:52.253 08:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 853513 ']' 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.253 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.512 [2024-11-06 08:58:05.572794] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:52.512 [2024-11-06 08:58:05.572918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.512 [2024-11-06 08:58:05.643671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.512 [2024-11-06 08:58:05.693112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.512 [2024-11-06 08:58:05.693190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.512 [2024-11-06 08:58:05.693212] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.512 [2024-11-06 08:58:05.693223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:52.512 [2024-11-06 08:58:05.693231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.512 [2024-11-06 08:58:05.693769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.512 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.4JdmXq92Yv 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.4JdmXq92Yv 00:21:52.769 08:58:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:53.027 [2024-11-06 08:58:06.132333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.027 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:53.285 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:53.543 [2024-11-06 08:58:06.665683] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:53.543 [2024-11-06 08:58:06.665937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.543 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:53.802 malloc0 00:21:53.802 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:54.061 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:54.355 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=853806 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 853806 /var/tmp/bdevperf.sock 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 853806 ']' 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.640 08:58:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.640 08:58:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.640 [2024-11-06 08:58:07.797236] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:54.640 [2024-11-06 08:58:07.797326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853806 ] 00:21:54.640 [2024-11-06 08:58:07.862559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.640 [2024-11-06 08:58:07.921029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.898 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.898 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:54.898 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:55.156 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:55.414 [2024-11-06 08:58:08.555122] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental
00:21:55.414 nvme0n1
00:21:55.414 08:58:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:55.671 Running I/O for 1 seconds...
00:21:56.605 3497.00 IOPS, 13.66 MiB/s
00:21:56.605 Latency(us)
00:21:56.605 [2024-11-06T07:58:09.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.605 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:56.605 Verification LBA range: start 0x0 length 0x2000
00:21:56.605 nvme0n1 : 1.02 3538.31 13.82 0.00 0.00 35796.40 9417.77 31651.46
00:21:56.605 [2024-11-06T07:58:09.894Z] ===================================================================================================================
00:21:56.605 [2024-11-06T07:58:09.894Z] Total : 3538.31 13.82 0.00 0.00 35796.40 9417.77 31651.46
00:21:56.605 {
00:21:56.605 "results": [
00:21:56.605 {
00:21:56.605 "job": "nvme0n1",
00:21:56.605 "core_mask": "0x2",
00:21:56.605 "workload": "verify",
00:21:56.605 "status": "finished",
00:21:56.605 "verify_range": {
00:21:56.605 "start": 0,
00:21:56.605 "length": 8192
00:21:56.605 },
00:21:56.605 "queue_depth": 128,
00:21:56.605 "io_size": 4096,
00:21:56.605 "runtime": 1.0245,
00:21:56.605 "iops": 3538.311371400683,
00:21:56.605 "mibps": 13.821528794533918,
00:21:56.605 "io_failed": 0,
00:21:56.605 "io_timeout": 0,
00:21:56.605 "avg_latency_us": 35796.402657266925,
00:21:56.605 "min_latency_us": 9417.765925925925,
00:21:56.605 "max_latency_us": 31651.460740740742
00:21:56.605 }
00:21:56.605 ],
00:21:56.605 "core_count": 1
00:21:56.605 }
00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 853806
00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 853806 ']'
00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954
-- # kill -0 853806 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 853806 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 853806' 00:21:56.605 killing process with pid 853806 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 853806 00:21:56.605 Received shutdown signal, test time was about 1.000000 seconds 00:21:56.605 00:21:56.605 Latency(us) 00:21:56.605 [2024-11-06T07:58:09.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.605 [2024-11-06T07:58:09.894Z] =================================================================================================================== 00:21:56.605 [2024-11-06T07:58:09.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.605 08:58:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 853806 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 853513 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 853513 ']' 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 853513 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 853513 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:56.863 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:56.864 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 853513' 00:21:56.864 killing process with pid 853513 00:21:56.864 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 853513 00:21:56.864 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 853513 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=854085 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 854085 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 854085 ']' 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.124 08:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.124 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.124 [2024-11-06 08:58:10.406587] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:21:57.124 [2024-11-06 08:58:10.406682] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.383 [2024-11-06 08:58:10.479980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.383 [2024-11-06 08:58:10.532730] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.383 [2024-11-06 08:58:10.532790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.383 [2024-11-06 08:58:10.532804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.383 [2024-11-06 08:58:10.532826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.383 [2024-11-06 08:58:10.532855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:57.383 [2024-11-06 08:58:10.533411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.383 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.383 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:57.383 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:57.383 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.383 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.642 [2024-11-06 08:58:10.680265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.642 malloc0 00:21:57.642 [2024-11-06 08:58:10.711795] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.642 [2024-11-06 08:58:10.712063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=854226 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 854226 /var/tmp/bdevperf.sock 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 854226 ']' 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.642 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.642 [2024-11-06 08:58:10.783407] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:21:57.642 [2024-11-06 08:58:10.783479] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854226 ] 00:21:57.642 [2024-11-06 08:58:10.848871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.642 [2024-11-06 08:58:10.906416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.899 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.899 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:57.899 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4JdmXq92Yv 00:21:58.157 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.415 [2024-11-06 08:58:11.538884] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.415 nvme0n1 00:21:58.415 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.673 Running I/O for 1 seconds... 
00:21:59.607 3436.00 IOPS, 13.42 MiB/s 00:21:59.607 Latency(us) 00:21:59.607 [2024-11-06T07:58:12.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.607 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.607 Verification LBA range: start 0x0 length 0x2000 00:21:59.607 nvme0n1 : 1.02 3497.53 13.66 0.00 0.00 36266.54 7767.23 32234.00 00:21:59.607 [2024-11-06T07:58:12.896Z] =================================================================================================================== 00:21:59.607 [2024-11-06T07:58:12.896Z] Total : 3497.53 13.66 0.00 0.00 36266.54 7767.23 32234.00 00:21:59.607 { 00:21:59.607 "results": [ 00:21:59.607 { 00:21:59.607 "job": "nvme0n1", 00:21:59.607 "core_mask": "0x2", 00:21:59.607 "workload": "verify", 00:21:59.607 "status": "finished", 00:21:59.607 "verify_range": { 00:21:59.607 "start": 0, 00:21:59.607 "length": 8192 00:21:59.607 }, 00:21:59.607 "queue_depth": 128, 00:21:59.607 "io_size": 4096, 00:21:59.607 "runtime": 1.01929, 00:21:59.607 "iops": 3497.5325962189368, 00:21:59.607 "mibps": 13.662236703980222, 00:21:59.607 "io_failed": 0, 00:21:59.607 "io_timeout": 0, 00:21:59.607 "avg_latency_us": 36266.5418781362, 00:21:59.607 "min_latency_us": 7767.22962962963, 00:21:59.607 "max_latency_us": 32234.002962962964 00:21:59.607 } 00:21:59.607 ], 00:21:59.607 "core_count": 1 00:21:59.607 } 00:21:59.607 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:59.607 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.607 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.607 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.607 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:59.607 "subsystems": [ 00:21:59.607 { 00:21:59.607 "subsystem": 
"keyring", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "keyring_file_add_key", 00:21:59.607 "params": { 00:21:59.607 "name": "key0", 00:21:59.607 "path": "/tmp/tmp.4JdmXq92Yv" 00:21:59.607 } 00:21:59.607 } 00:21:59.607 ] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "iobuf", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "iobuf_set_options", 00:21:59.607 "params": { 00:21:59.607 "small_pool_count": 8192, 00:21:59.607 "large_pool_count": 1024, 00:21:59.607 "small_bufsize": 8192, 00:21:59.607 "large_bufsize": 135168, 00:21:59.607 "enable_numa": false 00:21:59.607 } 00:21:59.607 } 00:21:59.607 ] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "sock", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "sock_set_default_impl", 00:21:59.607 "params": { 00:21:59.607 "impl_name": "posix" 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "sock_impl_set_options", 00:21:59.607 "params": { 00:21:59.607 "impl_name": "ssl", 00:21:59.607 "recv_buf_size": 4096, 00:21:59.607 "send_buf_size": 4096, 00:21:59.607 "enable_recv_pipe": true, 00:21:59.607 "enable_quickack": false, 00:21:59.607 "enable_placement_id": 0, 00:21:59.607 "enable_zerocopy_send_server": true, 00:21:59.607 "enable_zerocopy_send_client": false, 00:21:59.607 "zerocopy_threshold": 0, 00:21:59.607 "tls_version": 0, 00:21:59.607 "enable_ktls": false 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "sock_impl_set_options", 00:21:59.607 "params": { 00:21:59.607 "impl_name": "posix", 00:21:59.607 "recv_buf_size": 2097152, 00:21:59.607 "send_buf_size": 2097152, 00:21:59.607 "enable_recv_pipe": true, 00:21:59.607 "enable_quickack": false, 00:21:59.607 "enable_placement_id": 0, 00:21:59.607 "enable_zerocopy_send_server": true, 00:21:59.607 "enable_zerocopy_send_client": false, 00:21:59.607 "zerocopy_threshold": 0, 00:21:59.607 "tls_version": 0, 00:21:59.607 "enable_ktls": false 00:21:59.607 } 00:21:59.607 } 00:21:59.607 
] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "vmd", 00:21:59.607 "config": [] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "accel", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "accel_set_options", 00:21:59.607 "params": { 00:21:59.607 "small_cache_size": 128, 00:21:59.607 "large_cache_size": 16, 00:21:59.607 "task_count": 2048, 00:21:59.607 "sequence_count": 2048, 00:21:59.607 "buf_count": 2048 00:21:59.607 } 00:21:59.607 } 00:21:59.607 ] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "bdev", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "bdev_set_options", 00:21:59.607 "params": { 00:21:59.607 "bdev_io_pool_size": 65535, 00:21:59.607 "bdev_io_cache_size": 256, 00:21:59.607 "bdev_auto_examine": true, 00:21:59.607 "iobuf_small_cache_size": 128, 00:21:59.607 "iobuf_large_cache_size": 16 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_raid_set_options", 00:21:59.607 "params": { 00:21:59.607 "process_window_size_kb": 1024, 00:21:59.607 "process_max_bandwidth_mb_sec": 0 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_iscsi_set_options", 00:21:59.607 "params": { 00:21:59.607 "timeout_sec": 30 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_nvme_set_options", 00:21:59.607 "params": { 00:21:59.607 "action_on_timeout": "none", 00:21:59.607 "timeout_us": 0, 00:21:59.607 "timeout_admin_us": 0, 00:21:59.607 "keep_alive_timeout_ms": 10000, 00:21:59.607 "arbitration_burst": 0, 00:21:59.607 "low_priority_weight": 0, 00:21:59.607 "medium_priority_weight": 0, 00:21:59.607 "high_priority_weight": 0, 00:21:59.607 "nvme_adminq_poll_period_us": 10000, 00:21:59.607 "nvme_ioq_poll_period_us": 0, 00:21:59.607 "io_queue_requests": 0, 00:21:59.607 "delay_cmd_submit": true, 00:21:59.607 "transport_retry_count": 4, 00:21:59.607 "bdev_retry_count": 3, 00:21:59.607 "transport_ack_timeout": 0, 00:21:59.607 "ctrlr_loss_timeout_sec": 0, 
00:21:59.607 "reconnect_delay_sec": 0, 00:21:59.607 "fast_io_fail_timeout_sec": 0, 00:21:59.607 "disable_auto_failback": false, 00:21:59.607 "generate_uuids": false, 00:21:59.607 "transport_tos": 0, 00:21:59.607 "nvme_error_stat": false, 00:21:59.607 "rdma_srq_size": 0, 00:21:59.607 "io_path_stat": false, 00:21:59.607 "allow_accel_sequence": false, 00:21:59.607 "rdma_max_cq_size": 0, 00:21:59.607 "rdma_cm_event_timeout_ms": 0, 00:21:59.607 "dhchap_digests": [ 00:21:59.607 "sha256", 00:21:59.607 "sha384", 00:21:59.607 "sha512" 00:21:59.607 ], 00:21:59.607 "dhchap_dhgroups": [ 00:21:59.607 "null", 00:21:59.607 "ffdhe2048", 00:21:59.607 "ffdhe3072", 00:21:59.607 "ffdhe4096", 00:21:59.607 "ffdhe6144", 00:21:59.607 "ffdhe8192" 00:21:59.607 ] 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_nvme_set_hotplug", 00:21:59.607 "params": { 00:21:59.607 "period_us": 100000, 00:21:59.607 "enable": false 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_malloc_create", 00:21:59.607 "params": { 00:21:59.607 "name": "malloc0", 00:21:59.607 "num_blocks": 8192, 00:21:59.607 "block_size": 4096, 00:21:59.607 "physical_block_size": 4096, 00:21:59.607 "uuid": "25792b06-b2ae-4e16-b060-7c7487979f97", 00:21:59.607 "optimal_io_boundary": 0, 00:21:59.607 "md_size": 0, 00:21:59.607 "dif_type": 0, 00:21:59.607 "dif_is_head_of_md": false, 00:21:59.607 "dif_pi_format": 0 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "bdev_wait_for_examine" 00:21:59.607 } 00:21:59.607 ] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "nbd", 00:21:59.607 "config": [] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "scheduler", 00:21:59.607 "config": [ 00:21:59.607 { 00:21:59.607 "method": "framework_set_scheduler", 00:21:59.607 "params": { 00:21:59.607 "name": "static" 00:21:59.607 } 00:21:59.607 } 00:21:59.607 ] 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "subsystem": "nvmf", 00:21:59.607 "config": [ 00:21:59.607 { 
00:21:59.607 "method": "nvmf_set_config", 00:21:59.607 "params": { 00:21:59.607 "discovery_filter": "match_any", 00:21:59.607 "admin_cmd_passthru": { 00:21:59.607 "identify_ctrlr": false 00:21:59.607 }, 00:21:59.607 "dhchap_digests": [ 00:21:59.607 "sha256", 00:21:59.607 "sha384", 00:21:59.607 "sha512" 00:21:59.607 ], 00:21:59.607 "dhchap_dhgroups": [ 00:21:59.607 "null", 00:21:59.607 "ffdhe2048", 00:21:59.607 "ffdhe3072", 00:21:59.607 "ffdhe4096", 00:21:59.607 "ffdhe6144", 00:21:59.607 "ffdhe8192" 00:21:59.607 ] 00:21:59.607 } 00:21:59.607 }, 00:21:59.607 { 00:21:59.607 "method": "nvmf_set_max_subsystems", 00:21:59.607 "params": { 00:21:59.607 "max_subsystems": 1024 00:21:59.607 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_set_crdt", 00:21:59.608 "params": { 00:21:59.608 "crdt1": 0, 00:21:59.608 "crdt2": 0, 00:21:59.608 "crdt3": 0 00:21:59.608 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_create_transport", 00:21:59.608 "params": { 00:21:59.608 "trtype": "TCP", 00:21:59.608 "max_queue_depth": 128, 00:21:59.608 "max_io_qpairs_per_ctrlr": 127, 00:21:59.608 "in_capsule_data_size": 4096, 00:21:59.608 "max_io_size": 131072, 00:21:59.608 "io_unit_size": 131072, 00:21:59.608 "max_aq_depth": 128, 00:21:59.608 "num_shared_buffers": 511, 00:21:59.608 "buf_cache_size": 4294967295, 00:21:59.608 "dif_insert_or_strip": false, 00:21:59.608 "zcopy": false, 00:21:59.608 "c2h_success": false, 00:21:59.608 "sock_priority": 0, 00:21:59.608 "abort_timeout_sec": 1, 00:21:59.608 "ack_timeout": 0, 00:21:59.608 "data_wr_pool_size": 0 00:21:59.608 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_create_subsystem", 00:21:59.608 "params": { 00:21:59.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.608 "allow_any_host": false, 00:21:59.608 "serial_number": "00000000000000000000", 00:21:59.608 "model_number": "SPDK bdev Controller", 00:21:59.608 "max_namespaces": 32, 00:21:59.608 "min_cntlid": 1, 00:21:59.608 "max_cntlid": 65519, 00:21:59.608 
"ana_reporting": false 00:21:59.608 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_subsystem_add_host", 00:21:59.608 "params": { 00:21:59.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.608 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.608 "psk": "key0" 00:21:59.608 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_subsystem_add_ns", 00:21:59.608 "params": { 00:21:59.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.608 "namespace": { 00:21:59.608 "nsid": 1, 00:21:59.608 "bdev_name": "malloc0", 00:21:59.608 "nguid": "25792B06B2AE4E16B0607C7487979F97", 00:21:59.608 "uuid": "25792b06-b2ae-4e16-b060-7c7487979f97", 00:21:59.608 "no_auto_visible": false 00:21:59.608 } 00:21:59.608 } 00:21:59.608 }, 00:21:59.608 { 00:21:59.608 "method": "nvmf_subsystem_add_listener", 00:21:59.608 "params": { 00:21:59.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.608 "listen_address": { 00:21:59.608 "trtype": "TCP", 00:21:59.608 "adrfam": "IPv4", 00:21:59.608 "traddr": "10.0.0.2", 00:21:59.608 "trsvcid": "4420" 00:21:59.608 }, 00:21:59.608 "secure_channel": false, 00:21:59.608 "sock_impl": "ssl" 00:21:59.608 } 00:21:59.608 } 00:21:59.608 ] 00:21:59.608 } 00:21:59.608 ] 00:21:59.608 }' 00:21:59.608 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:00.173 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:00.173 "subsystems": [ 00:22:00.173 { 00:22:00.173 "subsystem": "keyring", 00:22:00.173 "config": [ 00:22:00.173 { 00:22:00.173 "method": "keyring_file_add_key", 00:22:00.173 "params": { 00:22:00.173 "name": "key0", 00:22:00.173 "path": "/tmp/tmp.4JdmXq92Yv" 00:22:00.173 } 00:22:00.173 } 00:22:00.173 ] 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "subsystem": "iobuf", 00:22:00.173 "config": [ 00:22:00.173 { 00:22:00.173 "method": "iobuf_set_options", 00:22:00.173 "params": { 00:22:00.173 
"small_pool_count": 8192, 00:22:00.173 "large_pool_count": 1024, 00:22:00.173 "small_bufsize": 8192, 00:22:00.173 "large_bufsize": 135168, 00:22:00.173 "enable_numa": false 00:22:00.173 } 00:22:00.173 } 00:22:00.173 ] 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "subsystem": "sock", 00:22:00.173 "config": [ 00:22:00.173 { 00:22:00.173 "method": "sock_set_default_impl", 00:22:00.173 "params": { 00:22:00.173 "impl_name": "posix" 00:22:00.173 } 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "method": "sock_impl_set_options", 00:22:00.173 "params": { 00:22:00.173 "impl_name": "ssl", 00:22:00.173 "recv_buf_size": 4096, 00:22:00.173 "send_buf_size": 4096, 00:22:00.173 "enable_recv_pipe": true, 00:22:00.173 "enable_quickack": false, 00:22:00.173 "enable_placement_id": 0, 00:22:00.173 "enable_zerocopy_send_server": true, 00:22:00.173 "enable_zerocopy_send_client": false, 00:22:00.173 "zerocopy_threshold": 0, 00:22:00.173 "tls_version": 0, 00:22:00.173 "enable_ktls": false 00:22:00.173 } 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "method": "sock_impl_set_options", 00:22:00.173 "params": { 00:22:00.173 "impl_name": "posix", 00:22:00.173 "recv_buf_size": 2097152, 00:22:00.173 "send_buf_size": 2097152, 00:22:00.173 "enable_recv_pipe": true, 00:22:00.173 "enable_quickack": false, 00:22:00.173 "enable_placement_id": 0, 00:22:00.173 "enable_zerocopy_send_server": true, 00:22:00.173 "enable_zerocopy_send_client": false, 00:22:00.173 "zerocopy_threshold": 0, 00:22:00.173 "tls_version": 0, 00:22:00.173 "enable_ktls": false 00:22:00.173 } 00:22:00.173 } 00:22:00.173 ] 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "subsystem": "vmd", 00:22:00.173 "config": [] 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "subsystem": "accel", 00:22:00.173 "config": [ 00:22:00.173 { 00:22:00.173 "method": "accel_set_options", 00:22:00.173 "params": { 00:22:00.173 "small_cache_size": 128, 00:22:00.173 "large_cache_size": 16, 00:22:00.173 "task_count": 2048, 00:22:00.173 "sequence_count": 2048, 00:22:00.173 
"buf_count": 2048 00:22:00.173 } 00:22:00.173 } 00:22:00.173 ] 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "subsystem": "bdev", 00:22:00.173 "config": [ 00:22:00.173 { 00:22:00.173 "method": "bdev_set_options", 00:22:00.173 "params": { 00:22:00.173 "bdev_io_pool_size": 65535, 00:22:00.173 "bdev_io_cache_size": 256, 00:22:00.173 "bdev_auto_examine": true, 00:22:00.173 "iobuf_small_cache_size": 128, 00:22:00.173 "iobuf_large_cache_size": 16 00:22:00.173 } 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "method": "bdev_raid_set_options", 00:22:00.173 "params": { 00:22:00.173 "process_window_size_kb": 1024, 00:22:00.173 "process_max_bandwidth_mb_sec": 0 00:22:00.173 } 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "method": "bdev_iscsi_set_options", 00:22:00.173 "params": { 00:22:00.173 "timeout_sec": 30 00:22:00.173 } 00:22:00.173 }, 00:22:00.173 { 00:22:00.173 "method": "bdev_nvme_set_options", 00:22:00.173 "params": { 00:22:00.173 "action_on_timeout": "none", 00:22:00.173 "timeout_us": 0, 00:22:00.173 "timeout_admin_us": 0, 00:22:00.173 "keep_alive_timeout_ms": 10000, 00:22:00.173 "arbitration_burst": 0, 00:22:00.173 "low_priority_weight": 0, 00:22:00.173 "medium_priority_weight": 0, 00:22:00.173 "high_priority_weight": 0, 00:22:00.173 "nvme_adminq_poll_period_us": 10000, 00:22:00.173 "nvme_ioq_poll_period_us": 0, 00:22:00.173 "io_queue_requests": 512, 00:22:00.173 "delay_cmd_submit": true, 00:22:00.173 "transport_retry_count": 4, 00:22:00.173 "bdev_retry_count": 3, 00:22:00.173 "transport_ack_timeout": 0, 00:22:00.173 "ctrlr_loss_timeout_sec": 0, 00:22:00.173 "reconnect_delay_sec": 0, 00:22:00.173 "fast_io_fail_timeout_sec": 0, 00:22:00.173 "disable_auto_failback": false, 00:22:00.173 "generate_uuids": false, 00:22:00.173 "transport_tos": 0, 00:22:00.173 "nvme_error_stat": false, 00:22:00.173 "rdma_srq_size": 0, 00:22:00.173 "io_path_stat": false, 00:22:00.173 "allow_accel_sequence": false, 00:22:00.173 "rdma_max_cq_size": 0, 00:22:00.173 "rdma_cm_event_timeout_ms": 0, 
00:22:00.173 "dhchap_digests": [ 00:22:00.173 "sha256", 00:22:00.173 "sha384", 00:22:00.173 "sha512" 00:22:00.173 ], 00:22:00.173 "dhchap_dhgroups": [ 00:22:00.173 "null", 00:22:00.173 "ffdhe2048", 00:22:00.173 "ffdhe3072", 00:22:00.173 "ffdhe4096", 00:22:00.173 "ffdhe6144", 00:22:00.173 "ffdhe8192" 00:22:00.173 ] 00:22:00.173 } 00:22:00.173 }, 00:22:00.174 { 00:22:00.174 "method": "bdev_nvme_attach_controller", 00:22:00.174 "params": { 00:22:00.174 "name": "nvme0", 00:22:00.174 "trtype": "TCP", 00:22:00.174 "adrfam": "IPv4", 00:22:00.174 "traddr": "10.0.0.2", 00:22:00.174 "trsvcid": "4420", 00:22:00.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.174 "prchk_reftag": false, 00:22:00.174 "prchk_guard": false, 00:22:00.174 "ctrlr_loss_timeout_sec": 0, 00:22:00.174 "reconnect_delay_sec": 0, 00:22:00.174 "fast_io_fail_timeout_sec": 0, 00:22:00.174 "psk": "key0", 00:22:00.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.174 "hdgst": false, 00:22:00.174 "ddgst": false, 00:22:00.174 "multipath": "multipath" 00:22:00.174 } 00:22:00.174 }, 00:22:00.174 { 00:22:00.174 "method": "bdev_nvme_set_hotplug", 00:22:00.174 "params": { 00:22:00.174 "period_us": 100000, 00:22:00.174 "enable": false 00:22:00.174 } 00:22:00.174 }, 00:22:00.174 { 00:22:00.174 "method": "bdev_enable_histogram", 00:22:00.174 "params": { 00:22:00.174 "name": "nvme0n1", 00:22:00.174 "enable": true 00:22:00.174 } 00:22:00.174 }, 00:22:00.174 { 00:22:00.174 "method": "bdev_wait_for_examine" 00:22:00.174 } 00:22:00.174 ] 00:22:00.174 }, 00:22:00.174 { 00:22:00.174 "subsystem": "nbd", 00:22:00.174 "config": [] 00:22:00.174 } 00:22:00.174 ] 00:22:00.174 }' 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 854226 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 854226 ']' 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 854226 00:22:00.174 08:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854226 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854226' 00:22:00.174 killing process with pid 854226 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 854226 00:22:00.174 Received shutdown signal, test time was about 1.000000 seconds 00:22:00.174 00:22:00.174 Latency(us) 00:22:00.174 [2024-11-06T07:58:13.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.174 [2024-11-06T07:58:13.463Z] =================================================================================================================== 00:22:00.174 [2024-11-06T07:58:13.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.174 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 854226 00:22:00.431 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 854085 00:22:00.431 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 854085 ']' 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 854085 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.432 08:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854085 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854085' 00:22:00.432 killing process with pid 854085 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 854085 00:22:00.432 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 854085 00:22:00.690 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:00.690 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:00.690 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:00.690 "subsystems": [ 00:22:00.690 { 00:22:00.690 "subsystem": "keyring", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "keyring_file_add_key", 00:22:00.690 "params": { 00:22:00.690 "name": "key0", 00:22:00.690 "path": "/tmp/tmp.4JdmXq92Yv" 00:22:00.690 } 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "iobuf", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "iobuf_set_options", 00:22:00.690 "params": { 00:22:00.690 "small_pool_count": 8192, 00:22:00.690 "large_pool_count": 1024, 00:22:00.690 "small_bufsize": 8192, 00:22:00.690 "large_bufsize": 135168, 00:22:00.690 "enable_numa": false 00:22:00.690 } 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "sock", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "sock_set_default_impl", 00:22:00.690 "params": { 00:22:00.690 "impl_name": "posix" 00:22:00.690 
} 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "sock_impl_set_options", 00:22:00.690 "params": { 00:22:00.690 "impl_name": "ssl", 00:22:00.690 "recv_buf_size": 4096, 00:22:00.690 "send_buf_size": 4096, 00:22:00.690 "enable_recv_pipe": true, 00:22:00.690 "enable_quickack": false, 00:22:00.690 "enable_placement_id": 0, 00:22:00.690 "enable_zerocopy_send_server": true, 00:22:00.690 "enable_zerocopy_send_client": false, 00:22:00.690 "zerocopy_threshold": 0, 00:22:00.690 "tls_version": 0, 00:22:00.690 "enable_ktls": false 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "sock_impl_set_options", 00:22:00.690 "params": { 00:22:00.690 "impl_name": "posix", 00:22:00.690 "recv_buf_size": 2097152, 00:22:00.690 "send_buf_size": 2097152, 00:22:00.690 "enable_recv_pipe": true, 00:22:00.690 "enable_quickack": false, 00:22:00.690 "enable_placement_id": 0, 00:22:00.690 "enable_zerocopy_send_server": true, 00:22:00.690 "enable_zerocopy_send_client": false, 00:22:00.690 "zerocopy_threshold": 0, 00:22:00.690 "tls_version": 0, 00:22:00.690 "enable_ktls": false 00:22:00.690 } 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "vmd", 00:22:00.690 "config": [] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "accel", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "accel_set_options", 00:22:00.690 "params": { 00:22:00.690 "small_cache_size": 128, 00:22:00.690 "large_cache_size": 16, 00:22:00.690 "task_count": 2048, 00:22:00.690 "sequence_count": 2048, 00:22:00.690 "buf_count": 2048 00:22:00.690 } 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "bdev", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "bdev_set_options", 00:22:00.690 "params": { 00:22:00.690 "bdev_io_pool_size": 65535, 00:22:00.690 "bdev_io_cache_size": 256, 00:22:00.690 "bdev_auto_examine": true, 00:22:00.690 "iobuf_small_cache_size": 128, 00:22:00.690 "iobuf_large_cache_size": 16 
00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_raid_set_options", 00:22:00.690 "params": { 00:22:00.690 "process_window_size_kb": 1024, 00:22:00.690 "process_max_bandwidth_mb_sec": 0 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_iscsi_set_options", 00:22:00.690 "params": { 00:22:00.690 "timeout_sec": 30 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_nvme_set_options", 00:22:00.690 "params": { 00:22:00.690 "action_on_timeout": "none", 00:22:00.690 "timeout_us": 0, 00:22:00.690 "timeout_admin_us": 0, 00:22:00.690 "keep_alive_timeout_ms": 10000, 00:22:00.690 "arbitration_burst": 0, 00:22:00.690 "low_priority_weight": 0, 00:22:00.690 "medium_priority_weight": 0, 00:22:00.690 "high_priority_weight": 0, 00:22:00.690 "nvme_adminq_poll_period_us": 10000, 00:22:00.690 "nvme_ioq_poll_period_us": 0, 00:22:00.690 "io_queue_requests": 0, 00:22:00.690 "delay_cmd_submit": true, 00:22:00.690 "transport_retry_count": 4, 00:22:00.690 "bdev_retry_count": 3, 00:22:00.690 "transport_ack_timeout": 0, 00:22:00.690 "ctrlr_loss_timeout_sec": 0, 00:22:00.690 "reconnect_delay_sec": 0, 00:22:00.690 "fast_io_fail_timeout_sec": 0, 00:22:00.690 "disable_auto_failback": false, 00:22:00.690 "generate_uuids": false, 00:22:00.690 "transport_tos": 0, 00:22:00.690 "nvme_error_stat": false, 00:22:00.690 "rdma_srq_size": 0, 00:22:00.690 "io_path_stat": false, 00:22:00.690 "allow_accel_sequence": false, 00:22:00.690 "rdma_max_cq_size": 0, 00:22:00.690 "rdma_cm_event_timeout_ms": 0, 00:22:00.690 "dhchap_digests": [ 00:22:00.690 "sha256", 00:22:00.690 "sha384", 00:22:00.690 "sha512" 00:22:00.690 ], 00:22:00.690 "dhchap_dhgroups": [ 00:22:00.690 "null", 00:22:00.690 "ffdhe2048", 00:22:00.690 "ffdhe3072", 00:22:00.690 "ffdhe4096", 00:22:00.690 "ffdhe6144", 00:22:00.690 "ffdhe8192" 00:22:00.690 ] 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_nvme_set_hotplug", 00:22:00.690 "params": { 00:22:00.690 
"period_us": 100000, 00:22:00.690 "enable": false 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_malloc_create", 00:22:00.690 "params": { 00:22:00.690 "name": "malloc0", 00:22:00.690 "num_blocks": 8192, 00:22:00.690 "block_size": 4096, 00:22:00.690 "physical_block_size": 4096, 00:22:00.690 "uuid": "25792b06-b2ae-4e16-b060-7c7487979f97", 00:22:00.690 "optimal_io_boundary": 0, 00:22:00.690 "md_size": 0, 00:22:00.690 "dif_type": 0, 00:22:00.690 "dif_is_head_of_md": false, 00:22:00.690 "dif_pi_format": 0 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "bdev_wait_for_examine" 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "nbd", 00:22:00.690 "config": [] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "scheduler", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "framework_set_scheduler", 00:22:00.690 "params": { 00:22:00.690 "name": "static" 00:22:00.690 } 00:22:00.690 } 00:22:00.690 ] 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "subsystem": "nvmf", 00:22:00.690 "config": [ 00:22:00.690 { 00:22:00.690 "method": "nvmf_set_config", 00:22:00.690 "params": { 00:22:00.690 "discovery_filter": "match_any", 00:22:00.690 "admin_cmd_passthru": { 00:22:00.690 "identify_ctrlr": false 00:22:00.690 }, 00:22:00.690 "dhchap_digests": [ 00:22:00.690 "sha256", 00:22:00.690 "sha384", 00:22:00.690 "sha512" 00:22:00.690 ], 00:22:00.690 "dhchap_dhgroups": [ 00:22:00.690 "null", 00:22:00.690 "ffdhe2048", 00:22:00.690 "ffdhe3072", 00:22:00.690 "ffdhe4096", 00:22:00.690 "ffdhe6144", 00:22:00.690 "ffdhe8192" 00:22:00.690 ] 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "nvmf_set_max_subsystems", 00:22:00.690 "params": { 00:22:00.690 "max_subsystems": 1024 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "nvmf_set_crdt", 00:22:00.690 "params": { 00:22:00.690 "crdt1": 0, 00:22:00.690 "crdt2": 0, 00:22:00.690 "crdt3": 0 00:22:00.690 } 
00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "nvmf_create_transport", 00:22:00.690 "params": { 00:22:00.690 "trtype": "TCP", 00:22:00.690 "max_queue_depth": 128, 00:22:00.690 "max_io_qpairs_per_ctrlr": 127, 00:22:00.690 "in_capsule_data_size": 4096, 00:22:00.690 "max_io_size": 131072, 00:22:00.690 "io_unit_size": 131072, 00:22:00.690 "max_aq_depth": 128, 00:22:00.690 "num_shared_buffers": 511, 00:22:00.690 "buf_cache_size": 4294967295, 00:22:00.690 "dif_insert_or_strip": false, 00:22:00.690 "zcopy": false, 00:22:00.690 "c2h_success": false, 00:22:00.690 "sock_priority": 0, 00:22:00.690 "abort_timeout_sec": 1, 00:22:00.690 "ack_timeout": 0, 00:22:00.690 "data_wr_pool_size": 0 00:22:00.690 } 00:22:00.690 }, 00:22:00.690 { 00:22:00.690 "method": "nvmf_create_subsystem", 00:22:00.690 "params": { 00:22:00.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.690 "allow_any_host": false, 00:22:00.690 "serial_number": "00000000000000000000", 00:22:00.690 "model_number": "SPDK bdev Controller", 00:22:00.690 "max_namespaces": 32, 00:22:00.691 "min_cntlid": 1, 00:22:00.691 "max_cntlid": 65519, 00:22:00.691 "ana_reporting": false 00:22:00.691 } 00:22:00.691 }, 00:22:00.691 { 00:22:00.691 "method": "nvmf_subsystem_add_host", 00:22:00.691 "params": { 00:22:00.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.691 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.691 "psk": "key0" 00:22:00.691 } 00:22:00.691 }, 00:22:00.691 { 00:22:00.691 "method": "nvmf_subsystem_add_ns", 00:22:00.691 "params": { 00:22:00.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.691 "namespace": { 00:22:00.691 "nsid": 1, 00:22:00.691 "bdev_name": "malloc0", 00:22:00.691 "nguid": "25792B06B2AE4E16B0607C7487979F97", 00:22:00.691 "uuid": "25792b06-b2ae-4e16-b060-7c7487979f97", 00:22:00.691 "no_auto_visible": false 00:22:00.691 } 00:22:00.691 } 00:22:00.691 }, 00:22:00.691 { 00:22:00.691 "method": "nvmf_subsystem_add_listener", 00:22:00.691 "params": { 00:22:00.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:00.691 "listen_address": { 00:22:00.691 "trtype": "TCP", 00:22:00.691 "adrfam": "IPv4", 00:22:00.691 "traddr": "10.0.0.2", 00:22:00.691 "trsvcid": "4420" 00:22:00.691 }, 00:22:00.691 "secure_channel": false, 00:22:00.691 "sock_impl": "ssl" 00:22:00.691 } 00:22:00.691 } 00:22:00.691 ] 00:22:00.691 } 00:22:00.691 ] 00:22:00.691 }' 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=854521 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 854521 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 854521 ']' 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:00.691 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.691 [2024-11-06 08:58:13.809376] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:22:00.691 [2024-11-06 08:58:13.809484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.691 [2024-11-06 08:58:13.879163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.691 [2024-11-06 08:58:13.930604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.691 [2024-11-06 08:58:13.930665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.691 [2024-11-06 08:58:13.930688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.691 [2024-11-06 08:58:13.930698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.691 [2024-11-06 08:58:13.930708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.691 [2024-11-06 08:58:13.931396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.949 [2024-11-06 08:58:14.174224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.949 [2024-11-06 08:58:14.206265] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.949 [2024-11-06 08:58:14.206474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.514 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.514 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:01.514 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:01.514 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.514 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=854672 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 854672 /var/tmp/bdevperf.sock 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 854672 ']' 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.772 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:01.772 "subsystems": [ 00:22:01.772 { 00:22:01.772 "subsystem": "keyring", 00:22:01.772 "config": [ 00:22:01.772 { 00:22:01.772 "method": "keyring_file_add_key", 00:22:01.772 "params": { 00:22:01.772 "name": "key0", 00:22:01.772 "path": "/tmp/tmp.4JdmXq92Yv" 00:22:01.772 } 00:22:01.772 } 00:22:01.772 ] 00:22:01.772 }, 00:22:01.772 { 00:22:01.772 "subsystem": "iobuf", 00:22:01.772 "config": [ 00:22:01.772 { 00:22:01.772 "method": "iobuf_set_options", 00:22:01.772 "params": { 00:22:01.772 "small_pool_count": 8192, 00:22:01.772 "large_pool_count": 1024, 00:22:01.772 "small_bufsize": 8192, 00:22:01.772 "large_bufsize": 135168, 00:22:01.772 "enable_numa": false 00:22:01.772 } 00:22:01.772 } 00:22:01.772 ] 00:22:01.772 }, 00:22:01.772 { 00:22:01.772 "subsystem": "sock", 00:22:01.772 "config": [ 00:22:01.772 { 00:22:01.772 "method": "sock_set_default_impl", 00:22:01.772 "params": { 00:22:01.772 "impl_name": "posix" 00:22:01.772 } 00:22:01.772 }, 00:22:01.772 { 00:22:01.772 "method": "sock_impl_set_options", 00:22:01.772 "params": { 00:22:01.772 "impl_name": "ssl", 00:22:01.772 "recv_buf_size": 4096, 00:22:01.772 "send_buf_size": 4096, 00:22:01.772 "enable_recv_pipe": true, 00:22:01.773 "enable_quickack": false, 00:22:01.773 "enable_placement_id": 0, 00:22:01.773 "enable_zerocopy_send_server": true, 00:22:01.773 "enable_zerocopy_send_client": false, 00:22:01.773 "zerocopy_threshold": 0, 00:22:01.773 "tls_version": 0, 00:22:01.773 "enable_ktls": false 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "sock_impl_set_options", 00:22:01.773 "params": { 
00:22:01.773 "impl_name": "posix", 00:22:01.773 "recv_buf_size": 2097152, 00:22:01.773 "send_buf_size": 2097152, 00:22:01.773 "enable_recv_pipe": true, 00:22:01.773 "enable_quickack": false, 00:22:01.773 "enable_placement_id": 0, 00:22:01.773 "enable_zerocopy_send_server": true, 00:22:01.773 "enable_zerocopy_send_client": false, 00:22:01.773 "zerocopy_threshold": 0, 00:22:01.773 "tls_version": 0, 00:22:01.773 "enable_ktls": false 00:22:01.773 } 00:22:01.773 } 00:22:01.773 ] 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "subsystem": "vmd", 00:22:01.773 "config": [] 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "subsystem": "accel", 00:22:01.773 "config": [ 00:22:01.773 { 00:22:01.773 "method": "accel_set_options", 00:22:01.773 "params": { 00:22:01.773 "small_cache_size": 128, 00:22:01.773 "large_cache_size": 16, 00:22:01.773 "task_count": 2048, 00:22:01.773 "sequence_count": 2048, 00:22:01.773 "buf_count": 2048 00:22:01.773 } 00:22:01.773 } 00:22:01.773 ] 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "subsystem": "bdev", 00:22:01.773 "config": [ 00:22:01.773 { 00:22:01.773 "method": "bdev_set_options", 00:22:01.773 "params": { 00:22:01.773 "bdev_io_pool_size": 65535, 00:22:01.773 "bdev_io_cache_size": 256, 00:22:01.773 "bdev_auto_examine": true, 00:22:01.773 "iobuf_small_cache_size": 128, 00:22:01.773 "iobuf_large_cache_size": 16 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_raid_set_options", 00:22:01.773 "params": { 00:22:01.773 "process_window_size_kb": 1024, 00:22:01.773 "process_max_bandwidth_mb_sec": 0 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_iscsi_set_options", 00:22:01.773 "params": { 00:22:01.773 "timeout_sec": 30 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_nvme_set_options", 00:22:01.773 "params": { 00:22:01.773 "action_on_timeout": "none", 00:22:01.773 "timeout_us": 0, 00:22:01.773 "timeout_admin_us": 0, 00:22:01.773 "keep_alive_timeout_ms": 10000, 00:22:01.773 
"arbitration_burst": 0, 00:22:01.773 "low_priority_weight": 0, 00:22:01.773 "medium_priority_weight": 0, 00:22:01.773 "high_priority_weight": 0, 00:22:01.773 "nvme_adminq_poll_period_us": 10000, 00:22:01.773 "nvme_ioq_poll_period_us": 0, 00:22:01.773 "io_queue_requests": 512, 00:22:01.773 "delay_cmd_submit": true, 00:22:01.773 "transport_retry_count": 4, 00:22:01.773 "bdev_retry_count": 3, 00:22:01.773 "transport_ack_timeout": 0, 00:22:01.773 "ctrlr_loss_timeout_sec": 0, 00:22:01.773 "reconnect_delay_sec": 0, 00:22:01.773 "fast_io_fail_timeout_sec": 0, 00:22:01.773 "disable_auto_failback": false, 00:22:01.773 "generate_uuids": false, 00:22:01.773 "transport_tos": 0, 00:22:01.773 "nvme_error_stat": false, 00:22:01.773 "rdma_srq_size": 0, 00:22:01.773 "io_path_stat": false, 00:22:01.773 "allow_accel_sequence": false, 00:22:01.773 "rdma_max_cq_size": 0, 00:22:01.773 "rdma_cm_event_timeout_ms": 0, 00:22:01.773 "dhchap_digests": [ 00:22:01.773 "sha256", 00:22:01.773 "sha384", 00:22:01.773 "sha512" 00:22:01.773 ], 00:22:01.773 "dhchap_dhgroups": [ 00:22:01.773 "null", 00:22:01.773 "ffdhe2048", 00:22:01.773 "ffdhe3072", 00:22:01.773 "ffdhe4096", 00:22:01.773 "ffdhe6144", 00:22:01.773 "ffdhe8192" 00:22:01.773 ] 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_nvme_attach_controller", 00:22:01.773 "params": { 00:22:01.773 "name": "nvme0", 00:22:01.773 "trtype": "TCP", 00:22:01.773 "adrfam": "IPv4", 00:22:01.773 "traddr": "10.0.0.2", 00:22:01.773 "trsvcid": "4420", 00:22:01.773 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.773 "prchk_reftag": false, 00:22:01.773 "prchk_guard": false, 00:22:01.773 "ctrlr_loss_timeout_sec": 0, 00:22:01.773 "reconnect_delay_sec": 0, 00:22:01.773 "fast_io_fail_timeout_sec": 0, 00:22:01.773 "psk": "key0", 00:22:01.773 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.773 "hdgst": false, 00:22:01.773 "ddgst": false, 00:22:01.773 "multipath": "multipath" 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 
"method": "bdev_nvme_set_hotplug", 00:22:01.773 "params": { 00:22:01.773 "period_us": 100000, 00:22:01.773 "enable": false 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_enable_histogram", 00:22:01.773 "params": { 00:22:01.773 "name": "nvme0n1", 00:22:01.773 "enable": true 00:22:01.773 } 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "method": "bdev_wait_for_examine" 00:22:01.773 } 00:22:01.773 ] 00:22:01.773 }, 00:22:01.773 { 00:22:01.773 "subsystem": "nbd", 00:22:01.773 "config": [] 00:22:01.773 } 00:22:01.773 ] 00:22:01.773 }' 00:22:01.773 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.773 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.773 [2024-11-06 08:58:14.853774] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:22:01.773 [2024-11-06 08:58:14.853898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854672 ] 00:22:01.773 [2024-11-06 08:58:14.935751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.773 [2024-11-06 08:58:15.009732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.031 [2024-11-06 08:58:15.202370] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:02.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.289 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.289 08:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:02.546 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.546 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.546 Running I/O for 1 seconds... 00:22:03.479 3224.00 IOPS, 12.59 MiB/s 00:22:03.479 Latency(us) 00:22:03.479 [2024-11-06T07:58:16.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:03.479 Verification LBA range: start 0x0 length 0x2000 00:22:03.479 nvme0n1 : 1.02 3298.20 12.88 0.00 0.00 38521.97 6310.87 38641.97 00:22:03.479 [2024-11-06T07:58:16.768Z] =================================================================================================================== 00:22:03.479 [2024-11-06T07:58:16.768Z] Total : 3298.20 12.88 0.00 0.00 38521.97 6310.87 38641.97 00:22:03.479 { 00:22:03.479 "results": [ 00:22:03.479 { 00:22:03.479 "job": "nvme0n1", 00:22:03.479 "core_mask": "0x2", 00:22:03.479 "workload": "verify", 00:22:03.479 "status": "finished", 00:22:03.479 "verify_range": { 00:22:03.479 "start": 0, 00:22:03.479 "length": 8192 00:22:03.479 }, 00:22:03.479 "queue_depth": 128, 00:22:03.479 "io_size": 4096, 00:22:03.479 "runtime": 1.016614, 00:22:03.479 "iops": 3298.203644647821, 00:22:03.479 "mibps": 12.88360798690555, 00:22:03.479 "io_failed": 0, 00:22:03.479 "io_timeout": 0, 00:22:03.479 "avg_latency_us": 38521.97262816052, 00:22:03.479 "min_latency_us": 6310.874074074074, 00:22:03.479 "max_latency_us": 38641.96740740741 00:22:03.479 } 00:22:03.479 ], 00:22:03.479 "core_count": 1 00:22:03.479 } 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:03.479 08:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:03.479 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:03.479 nvmf_trace.0 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 854672 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 854672 ']' 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 854672 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 854672 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854672' 00:22:03.738 killing process with pid 854672 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 854672 00:22:03.738 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.738 00:22:03.738 Latency(us) 00:22:03.738 [2024-11-06T07:58:17.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.738 [2024-11-06T07:58:17.027Z] =================================================================================================================== 00:22:03.738 [2024-11-06T07:58:17.027Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.738 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 854672 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.996 rmmod nvme_tcp 00:22:03.996 rmmod nvme_fabrics 00:22:03.996 rmmod nvme_keyring 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 854521 ']' 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 854521 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 854521 ']' 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 854521 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854521 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854521' 00:22:03.996 killing process with pid 854521 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 854521 00:22:03.996 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 854521 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.255 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.dj4drna4Dn /tmp/tmp.KCayXrdUBQ /tmp/tmp.4JdmXq92Yv 00:22:06.793 00:22:06.793 real 1m23.498s 00:22:06.793 user 2m17.427s 00:22:06.793 sys 0m26.229s 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.793 ************************************ 00:22:06.793 END TEST nvmf_tls 00:22:06.793 ************************************ 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.793 ************************************ 00:22:06.793 START TEST nvmf_fips 00:22:06.793 ************************************ 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.793 * Looking for test storage... 00:22:06.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lcov --version 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.793 
08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:06.793 08:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.793 --rc genhtml_branch_coverage=1 00:22:06.793 --rc genhtml_function_coverage=1 00:22:06.793 --rc genhtml_legend=1 00:22:06.793 --rc geninfo_all_blocks=1 00:22:06.793 --rc geninfo_unexecuted_blocks=1 00:22:06.793 00:22:06.793 ' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.793 --rc genhtml_branch_coverage=1 00:22:06.793 --rc genhtml_function_coverage=1 00:22:06.793 --rc genhtml_legend=1 00:22:06.793 --rc geninfo_all_blocks=1 00:22:06.793 --rc geninfo_unexecuted_blocks=1 00:22:06.793 00:22:06.793 ' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.793 --rc genhtml_branch_coverage=1 00:22:06.793 --rc genhtml_function_coverage=1 00:22:06.793 --rc genhtml_legend=1 00:22:06.793 --rc geninfo_all_blocks=1 00:22:06.793 --rc geninfo_unexecuted_blocks=1 00:22:06.793 00:22:06.793 ' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:06.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.793 --rc genhtml_branch_coverage=1 00:22:06.793 --rc genhtml_function_coverage=1 00:22:06.793 --rc genhtml_legend=1 00:22:06.793 --rc geninfo_all_blocks=1 00:22:06.793 --rc geninfo_unexecuted_blocks=1 00:22:06.793 00:22:06.793 ' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.793 08:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.793 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.794 08:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:06.794 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:06.794 Error setting digest 00:22:06.795 401214449E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:06.795 401214449E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:06.795 08:58:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.795 08:58:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.327 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:09.327 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:09.327 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:09.327 Found net devices under 0000:09:00.0: cvl_0_0 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:09.327 Found net devices under 0000:09:00.1: cvl_0_1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.327 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:09.327 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:09.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:22:09.328 00:22:09.328 --- 10.0.0.2 ping statistics --- 00:22:09.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.328 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:09.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:22:09.328 00:22:09.328 --- 10.0.0.1 ping statistics --- 00:22:09.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.328 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:09.328 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=857026 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 857026 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 857026 ']' 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.328 [2024-11-06 08:58:22.260360] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:22:09.328 [2024-11-06 08:58:22.260450] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.328 [2024-11-06 08:58:22.333394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.328 [2024-11-06 08:58:22.388677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.328 [2024-11-06 08:58:22.388738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.328 [2024-11-06 08:58:22.388752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.328 [2024-11-06 08:58:22.388763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.328 [2024-11-06 08:58:22.388772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:09.328 [2024-11-06 08:58:22.389406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.MuF 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.MuF 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.MuF 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.MuF 00:22:09.328 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.586 [2024-11-06 08:58:22.798123] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.586 [2024-11-06 08:58:22.814106] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.586 [2024-11-06 08:58:22.814352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.586 malloc0 00:22:09.586 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.586 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=857064 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 857064 /var/tmp/bdevperf.sock 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 857064 ']' 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:09.845 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.845 [2024-11-06 08:58:22.948700] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:22:09.845 [2024-11-06 08:58:22.948798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857064 ] 00:22:09.845 [2024-11-06 08:58:23.014504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.845 [2024-11-06 08:58:23.074516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.102 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.102 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:10.103 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.MuF 00:22:10.360 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:10.617 [2024-11-06 08:58:23.702662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.617 TLSTESTn1 00:22:10.617 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.875 Running I/O for 10 seconds... 
00:22:12.740 3339.00 IOPS, 13.04 MiB/s [2024-11-06T07:58:26.962Z] 3361.00 IOPS, 13.13 MiB/s [2024-11-06T07:58:28.335Z] 3408.67 IOPS, 13.32 MiB/s [2024-11-06T07:58:29.267Z] 3413.00 IOPS, 13.33 MiB/s [2024-11-06T07:58:30.199Z] 3418.00 IOPS, 13.35 MiB/s [2024-11-06T07:58:31.132Z] 3415.67 IOPS, 13.34 MiB/s [2024-11-06T07:58:32.064Z] 3407.71 IOPS, 13.31 MiB/s [2024-11-06T07:58:32.997Z] 3400.12 IOPS, 13.28 MiB/s [2024-11-06T07:58:34.370Z] 3407.22 IOPS, 13.31 MiB/s [2024-11-06T07:58:34.370Z] 3403.80 IOPS, 13.30 MiB/s 00:22:21.081 Latency(us) 00:22:21.081 [2024-11-06T07:58:34.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.081 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.081 Verification LBA range: start 0x0 length 0x2000 00:22:21.081 TLSTESTn1 : 10.02 3408.47 13.31 0.00 0.00 37482.99 9077.95 29321.29 00:22:21.081 [2024-11-06T07:58:34.370Z] =================================================================================================================== 00:22:21.081 [2024-11-06T07:58:34.370Z] Total : 3408.47 13.31 0.00 0.00 37482.99 9077.95 29321.29 00:22:21.081 { 00:22:21.081 "results": [ 00:22:21.081 { 00:22:21.081 "job": "TLSTESTn1", 00:22:21.081 "core_mask": "0x4", 00:22:21.081 "workload": "verify", 00:22:21.081 "status": "finished", 00:22:21.081 "verify_range": { 00:22:21.081 "start": 0, 00:22:21.081 "length": 8192 00:22:21.081 }, 00:22:21.081 "queue_depth": 128, 00:22:21.081 "io_size": 4096, 00:22:21.081 "runtime": 10.022961, 00:22:21.081 "iops": 3408.4738033002423, 00:22:21.081 "mibps": 13.314350794141571, 00:22:21.081 "io_failed": 0, 00:22:21.081 "io_timeout": 0, 00:22:21.081 "avg_latency_us": 37482.99107637568, 00:22:21.081 "min_latency_us": 9077.94962962963, 00:22:21.081 "max_latency_us": 29321.291851851853 00:22:21.081 } 00:22:21.081 ], 00:22:21.081 "core_count": 1 00:22:21.081 } 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:21.081 
08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:21.081 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:21.082 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:21.082 08:58:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:21.082 nvmf_trace.0 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 857064 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 857064 ']' 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 857064 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 857064 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 857064' 00:22:21.082 killing process with pid 857064 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 857064 00:22:21.082 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.082 00:22:21.082 Latency(us) 00:22:21.082 [2024-11-06T07:58:34.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.082 [2024-11-06T07:58:34.371Z] =================================================================================================================== 00:22:21.082 [2024-11-06T07:58:34.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 857064 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.082 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.082 rmmod nvme_tcp 00:22:21.082 rmmod nvme_fabrics 00:22:21.082 rmmod nvme_keyring 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.340 08:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 857026 ']' 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 857026 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 857026 ']' 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 857026 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 857026 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 857026' 00:22:21.340 killing process with pid 857026 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 857026 00:22:21.340 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 857026 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.598 08:58:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.MuF 00:22:23.504 00:22:23.504 real 0m17.180s 00:22:23.504 user 0m22.243s 00:22:23.504 sys 0m5.694s 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.504 ************************************ 00:22:23.504 END TEST nvmf_fips 00:22:23.504 ************************************ 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.504 ************************************ 00:22:23.504 START TEST nvmf_control_msg_list 00:22:23.504 ************************************ 00:22:23.504 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:23.763 * Looking for test storage... 00:22:23.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lcov --version 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:23.763 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.764 --rc genhtml_branch_coverage=1 00:22:23.764 --rc genhtml_function_coverage=1 00:22:23.764 --rc genhtml_legend=1 00:22:23.764 --rc geninfo_all_blocks=1 00:22:23.764 --rc geninfo_unexecuted_blocks=1 00:22:23.764 00:22:23.764 ' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.764 --rc genhtml_branch_coverage=1 00:22:23.764 --rc genhtml_function_coverage=1 00:22:23.764 --rc genhtml_legend=1 00:22:23.764 --rc geninfo_all_blocks=1 00:22:23.764 --rc geninfo_unexecuted_blocks=1 00:22:23.764 00:22:23.764 ' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:23.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.764 --rc genhtml_branch_coverage=1 00:22:23.764 --rc genhtml_function_coverage=1 00:22:23.764 --rc genhtml_legend=1 00:22:23.764 --rc geninfo_all_blocks=1 00:22:23.764 --rc geninfo_unexecuted_blocks=1 00:22:23.764 00:22:23.764 ' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:23.764 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.764 --rc genhtml_branch_coverage=1 00:22:23.764 --rc genhtml_function_coverage=1 00:22:23.764 --rc genhtml_legend=1 00:22:23.764 --rc geninfo_all_blocks=1 00:22:23.764 --rc geninfo_unexecuted_blocks=1 00:22:23.764 00:22:23.764 ' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.764 08:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.764 08:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.764 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.765 08:58:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.765 08:58:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.300 08:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.300 08:58:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:26.300 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:26.300 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.300 08:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.300 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:26.301 Found net devices under 0000:09:00.0: cvl_0_0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:26.301 08:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:26.301 Found net devices under 0000:09:00.1: cvl_0_1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.301 08:58:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:22:26.301 00:22:26.301 --- 10.0.0.2 ping statistics --- 00:22:26.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.301 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:22:26.301 00:22:26.301 --- 10.0.0.1 ping statistics --- 00:22:26.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.301 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=860440 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 860440 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 860440 ']' 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.301 [2024-11-06 08:58:39.231864] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:22:26.301 [2024-11-06 08:58:39.231948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.301 [2024-11-06 08:58:39.301940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.301 [2024-11-06 08:58:39.358120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.301 [2024-11-06 08:58:39.358198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.301 [2024-11-06 08:58:39.358219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.301 [2024-11-06 08:58:39.358230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.301 [2024-11-06 08:58:39.358240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:26.301 [2024-11-06 08:58:39.358810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:26.301 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.302 [2024-11-06 08:58:39.505276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.302 Malloc0 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.302 [2024-11-06 08:58:39.544566] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=860464 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=860465 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=860466 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 860464 00:22:26.302 08:58:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.564 [2024-11-06 08:58:39.603068] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:26.564 [2024-11-06 08:58:39.613670] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:26.564 [2024-11-06 08:58:39.613978] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:27.504 Initializing NVMe Controllers 00:22:27.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:27.504 Initialization complete. Launching workers. 00:22:27.504 ======================================================== 00:22:27.504 Latency(us) 00:22:27.504 Device Information : IOPS MiB/s Average min max 00:22:27.504 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40890.93 40725.99 40999.12 00:22:27.504 ======================================================== 00:22:27.504 Total : 25.00 0.10 40890.93 40725.99 40999.12 00:22:27.504 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 860465 00:22:27.504 Initializing NVMe Controllers 00:22:27.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:27.504 Initialization complete. Launching workers. 
00:22:27.504 ======================================================== 00:22:27.504 Latency(us) 00:22:27.504 Device Information : IOPS MiB/s Average min max 00:22:27.504 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3786.00 14.79 263.66 189.34 521.28 00:22:27.504 ======================================================== 00:22:27.504 Total : 3786.00 14.79 263.66 189.34 521.28 00:22:27.504 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 860466 00:22:27.504 Initializing NVMe Controllers 00:22:27.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:27.504 Initialization complete. Launching workers. 00:22:27.504 ======================================================== 00:22:27.504 Latency(us) 00:22:27.504 Device Information : IOPS MiB/s Average min max 00:22:27.504 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3892.00 15.20 256.57 157.29 644.01 00:22:27.504 ======================================================== 00:22:27.504 Total : 3892.00 15.20 256.57 157.29 644.01 00:22:27.504 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:27.504 08:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.504 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.504 rmmod nvme_tcp 00:22:27.504 rmmod nvme_fabrics 00:22:27.793 rmmod nvme_keyring 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 860440 ']' 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 860440 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 860440 ']' 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 860440 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 860440 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 860440' 00:22:27.793 killing process with pid 860440 00:22:27.793 08:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 860440 00:22:27.793 08:58:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 860440 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.076 08:58:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:29.981 00:22:29.981 real 0m6.401s 00:22:29.981 user 0m5.537s 00:22:29.981 sys 0m2.652s 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.981 ************************************ 00:22:29.981 END TEST nvmf_control_msg_list 00:22:29.981 ************************************ 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.981 ************************************ 00:22:29.981 START TEST nvmf_wait_for_buf 00:22:29.981 ************************************ 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:29.981 * Looking for test storage... 
00:22:29.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lcov --version 00:22:29.981 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # 
export 'LCOV_OPTS= 00:22:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.240 --rc genhtml_branch_coverage=1 00:22:30.240 --rc genhtml_function_coverage=1 00:22:30.240 --rc genhtml_legend=1 00:22:30.240 --rc geninfo_all_blocks=1 00:22:30.240 --rc geninfo_unexecuted_blocks=1 00:22:30.240 00:22:30.240 ' 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.240 --rc genhtml_branch_coverage=1 00:22:30.240 --rc genhtml_function_coverage=1 00:22:30.240 --rc genhtml_legend=1 00:22:30.240 --rc geninfo_all_blocks=1 00:22:30.240 --rc geninfo_unexecuted_blocks=1 00:22:30.240 00:22:30.240 ' 00:22:30.240 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.240 --rc genhtml_branch_coverage=1 00:22:30.240 --rc genhtml_function_coverage=1 00:22:30.240 --rc genhtml_legend=1 00:22:30.241 --rc geninfo_all_blocks=1 00:22:30.241 --rc geninfo_unexecuted_blocks=1 00:22:30.241 00:22:30.241 ' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:30.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.241 --rc genhtml_branch_coverage=1 00:22:30.241 --rc genhtml_function_coverage=1 00:22:30.241 --rc genhtml_legend=1 00:22:30.241 --rc geninfo_all_blocks=1 00:22:30.241 --rc geninfo_unexecuted_blocks=1 00:22:30.241 00:22:30.241 ' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.241 08:58:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:32.774 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:32.774 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:32.774 Found net devices under 0000:09:00.0: cvl_0_0 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:32.774 08:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:32.774 Found net devices under 0000:09:00.1: cvl_0_1 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.774 08:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.774 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.775 08:58:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:22:32.775 00:22:32.775 --- 10.0.0.2 ping statistics --- 00:22:32.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.775 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:32.775 00:22:32.775 --- 10.0.0.1 ping statistics --- 00:22:32.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.775 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=862548 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 862548 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 862548 ']' 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 [2024-11-06 08:58:45.666938] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:22:32.775 [2024-11-06 08:58:45.667019] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.775 [2024-11-06 08:58:45.737351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.775 [2024-11-06 08:58:45.792566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.775 [2024-11-06 08:58:45.792636] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:32.775 [2024-11-06 08:58:45.792650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.775 [2024-11-06 08:58:45.792660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.775 [2024-11-06 08:58:45.792669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.775 [2024-11-06 08:58:45.793264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 
08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 Malloc0 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:32.775 [2024-11-06 08:58:46.041627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.775 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.033 [2024-11-06 08:58:46.065838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.033 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:33.033 08:58:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:33.033 [2024-11-06 08:58:46.153952] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:34.407 Initializing NVMe Controllers 00:22:34.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:34.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:34.407 Initialization complete. Launching workers. 00:22:34.407 ======================================================== 00:22:34.407 Latency(us) 00:22:34.407 Device Information : IOPS MiB/s Average min max 00:22:34.407 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33659.47 30204.54 71810.44 00:22:34.407 ======================================================== 00:22:34.407 Total : 124.00 15.50 33659.47 30204.54 71810.44 00:22:34.407 00:22:34.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:34.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:34.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:34.407 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.666 08:58:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.666 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.667 rmmod nvme_tcp 00:22:34.667 rmmod nvme_fabrics 00:22:34.667 rmmod nvme_keyring 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 862548 ']' 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 862548 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 862548 ']' 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 862548 
00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 862548 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 862548' 00:22:34.667 killing process with pid 862548 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 862548 00:22:34.667 08:58:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 862548 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.926 08:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.926 08:58:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.833 00:22:36.833 real 0m6.866s 00:22:36.833 user 0m3.335s 00:22:36.833 sys 0m1.994s 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:36.833 ************************************ 00:22:36.833 END TEST nvmf_wait_for_buf 00:22:36.833 ************************************ 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:36.833 08:58:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.365 
08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:39.365 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.365 08:58:52 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:39.365 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:39.365 Found net devices under 0000:09:00.0: cvl_0_0 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:39.365 Found net devices under 0000:09:00.1: cvl_0_1 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:39.365 08:58:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.365 ************************************ 00:22:39.365 START TEST nvmf_perf_adq 00:22:39.365 ************************************ 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.366 * Looking for test storage... 00:22:39.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lcov --version 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:22:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.366 --rc genhtml_branch_coverage=1 00:22:39.366 --rc genhtml_function_coverage=1 00:22:39.366 --rc genhtml_legend=1 00:22:39.366 --rc geninfo_all_blocks=1 00:22:39.366 --rc geninfo_unexecuted_blocks=1 00:22:39.366 00:22:39.366 ' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:22:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.366 --rc genhtml_branch_coverage=1 00:22:39.366 --rc genhtml_function_coverage=1 00:22:39.366 --rc genhtml_legend=1 00:22:39.366 --rc geninfo_all_blocks=1 00:22:39.366 --rc geninfo_unexecuted_blocks=1 00:22:39.366 00:22:39.366 ' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:22:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.366 --rc genhtml_branch_coverage=1 00:22:39.366 --rc genhtml_function_coverage=1 00:22:39.366 --rc genhtml_legend=1 00:22:39.366 --rc geninfo_all_blocks=1 00:22:39.366 --rc geninfo_unexecuted_blocks=1 00:22:39.366 00:22:39.366 ' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:22:39.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.366 --rc genhtml_branch_coverage=1 00:22:39.366 --rc genhtml_function_coverage=1 00:22:39.366 --rc genhtml_legend=1 00:22:39.366 --rc geninfo_all_blocks=1 00:22:39.366 --rc geninfo_unexecuted_blocks=1 00:22:39.366 00:22:39.366 ' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.366 08:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:39.366 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.367 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.268 08:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.268 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:41.269 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:41.269 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:41.269 Found net devices under 0000:09:00.0: cvl_0_0 00:22:41.269 08:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:41.269 Found net devices under 0000:09:00.1: cvl_0_1 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
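The device scan above (nvmf/common.sh@409 and @425) glob-expands `"/sys/bus/pci/devices/$pci/net/"*` and then drops everything through the last `/` with the `##*/` parameter expansion, leaving bare interface names such as `cvl_0_0` and `cvl_0_1`. A minimal standalone sketch of that expansion, faking the sysfs layout under a temp directory so it runs without hardware:

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs handling from nvmf/common.sh@409/@425.
# The real script globs /sys/bus/pci/devices/$pci/net/*; here we fake
# that layout under mktemp so the expansion is demonstrable anywhere.
set -euo pipefail

tmp=$(mktemp -d)
mkdir -p "$tmp/0000:09:00.0/net/cvl_0_0" "$tmp/0000:09:00.1/net/cvl_0_1"

net_devs=()
for pci in "$tmp"/*; do
    pci_net_devs=("$pci/net/"*)              # full sysfs-style paths
    pci_net_devs=("${pci_net_devs[@]##*/}")  # strip dirs, keep iface names
    net_devs+=("${pci_net_devs[@]}")
done

echo "${net_devs[@]}"   # prints: cvl_0_0 cvl_0_1
rm -rf "$tmp"
```

The `##*/` form strips the longest prefix ending in `/`, i.e. a pure-bash `basename` applied across the whole array at once.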
00:22:41.269 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:41.835 08:58:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:43.736 08:58:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:49.009 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:49.009 08:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:49.009 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:49.009 Found net devices under 0000:09:00.0: cvl_0_0 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:49.009 Found net devices under 0000:09:00.1: cvl_0_1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.009 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:49.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:22:49.010 00:22:49.010 --- 10.0.0.2 ping statistics --- 00:22:49.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.010 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:22:49.010 00:22:49.010 --- 10.0.0.1 ping statistics --- 00:22:49.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.010 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
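The `ipts` call above (nvmf/common.sh@287/@788) wraps `iptables` so every inserted rule carries an `SPDK_NVMF:` comment; the `iptr` teardown later in the run restores the ruleset through `iptables-save | grep -v SPDK_NVMF`, removing exactly the test's rules. A sketch of that tag-then-filter pattern, with `iptables` stubbed as a recording function so it runs without root:

```shell
#!/usr/bin/env bash
# Sketch of the ipts/iptr pairing from nvmf/common.sh: rules are tagged
# with an SPDK_NVMF comment on insert so cleanup can filter them out.
# iptables is stubbed (recorded into an array) -- no root, no firewall.
set -euo pipefail

rules=()                      # stand-in for the live ruleset
iptables() { rules+=("$*"); } # stub: record instead of applying

ipts() {
    # tag the rule with its own argument string, as the real helper does
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# teardown: mirrors `iptables-save | grep -v SPDK_NVMF | iptables-restore`
remaining=$(printf '%s\n' "${rules[@]}" | grep -vc SPDK_NVMF || true)
echo "tagged rules: ${#rules[@]}, surviving cleanup: $remaining"
```

Because the tag embeds the original arguments, the saved ruleset also documents which test inserted each rule.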
start_nvmf_tgt 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=867384 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 867384 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 867384 ']' 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.010 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.010 [2024-11-06 08:59:02.241031] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:22:49.010 [2024-11-06 08:59:02.241107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.268 [2024-11-06 08:59:02.313058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.268 [2024-11-06 08:59:02.371805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.268 [2024-11-06 08:59:02.371889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.268 [2024-11-06 08:59:02.371913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.268 [2024-11-06 08:59:02.371925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.268 [2024-11-06 08:59:02.371934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
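`waitforlisten` above blocks until the freshly started `nvmf_tgt` is listening on `/var/tmp/spdk.sock`, polling with a bounded `max_retries=100` budget. A rough sketch of that polling shape, with the socket simulated by a background `touch` so it is runnable anywhere (the retry count and path mirror the log; the simulation itself is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling pattern (autotest_common.sh@835-838):
# poll for the target's RPC unix socket with a bounded retry budget.
# The socket is simulated by a delayed touch in a background subshell.
set -euo pipefail

rpc_addr=$(mktemp -u)                 # stand-in for /var/tmp/spdk.sock
( sleep 0.2; touch "$rpc_addr" ) &    # pretend nvmf_tgt comes up shortly

max_retries=100
i=0
until [[ -e $rpc_addr ]]; do
    (( ++i > max_retries )) && { echo "timeout waiting for $rpc_addr" >&2; exit 1; }
    sleep 0.05
done
wait                                   # reap the background "target"
echo "socket appeared within $i poll(s)"
rm -f "$rpc_addr"
```

The real helper additionally issues an RPC over the socket to confirm the process answers, not merely that the path exists.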
00:22:49.268 [2024-11-06 08:59:02.373438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.268 [2024-11-06 08:59:02.373503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.268 [2024-11-06 08:59:02.373556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.268 [2024-11-06 08:59:02.373559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:49.268 08:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.268 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 [2024-11-06 08:59:02.640629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 Malloc1 00:22:49.527 08:59:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.527 [2024-11-06 08:59:02.708708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=867417 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:49.527 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:52.056 "tick_rate": 2700000000, 00:22:52.056 "poll_groups": [ 00:22:52.056 { 00:22:52.056 "name": "nvmf_tgt_poll_group_000", 00:22:52.056 "admin_qpairs": 1, 00:22:52.056 "io_qpairs": 1, 00:22:52.056 "current_admin_qpairs": 1, 00:22:52.056 "current_io_qpairs": 1, 00:22:52.056 "pending_bdev_io": 0, 00:22:52.056 "completed_nvme_io": 19476, 00:22:52.056 "transports": [ 00:22:52.056 { 00:22:52.056 "trtype": "TCP" 00:22:52.056 } 00:22:52.056 ] 00:22:52.056 }, 00:22:52.056 { 00:22:52.056 "name": "nvmf_tgt_poll_group_001", 00:22:52.056 "admin_qpairs": 0, 00:22:52.056 "io_qpairs": 1, 00:22:52.056 "current_admin_qpairs": 0, 00:22:52.056 "current_io_qpairs": 1, 00:22:52.056 "pending_bdev_io": 0, 00:22:52.056 "completed_nvme_io": 19959, 00:22:52.056 "transports": [ 00:22:52.056 { 00:22:52.056 "trtype": "TCP" 00:22:52.056 } 00:22:52.056 ] 00:22:52.056 }, 00:22:52.056 { 00:22:52.056 "name": "nvmf_tgt_poll_group_002", 00:22:52.056 "admin_qpairs": 0, 00:22:52.056 "io_qpairs": 1, 00:22:52.056 "current_admin_qpairs": 0, 00:22:52.056 "current_io_qpairs": 1, 00:22:52.056 "pending_bdev_io": 0, 00:22:52.056 "completed_nvme_io": 
19433, 00:22:52.056 "transports": [ 00:22:52.056 { 00:22:52.056 "trtype": "TCP" 00:22:52.056 } 00:22:52.056 ] 00:22:52.056 }, 00:22:52.056 { 00:22:52.056 "name": "nvmf_tgt_poll_group_003", 00:22:52.056 "admin_qpairs": 0, 00:22:52.056 "io_qpairs": 1, 00:22:52.056 "current_admin_qpairs": 0, 00:22:52.056 "current_io_qpairs": 1, 00:22:52.056 "pending_bdev_io": 0, 00:22:52.056 "completed_nvme_io": 19899, 00:22:52.056 "transports": [ 00:22:52.056 { 00:22:52.056 "trtype": "TCP" 00:22:52.056 } 00:22:52.056 ] 00:22:52.056 } 00:22:52.056 ] 00:22:52.056 }' 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:52.056 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 867417 00:23:00.162 Initializing NVMe Controllers 00:23:00.162 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:00.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:00.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:00.162 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:00.162 Initialization complete. Launching workers. 
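The check at perf_adq.sh@86-87 pipes `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | ...' | wc -l` and fails the test unless all 4 poll groups carry an active I/O qpair, i.e. ADQ actually spread the connections across cores. A rough grep-based equivalent of that count over the same JSON shape (grep substituted for jq so the sketch has no external dependency; the trimmed stats blob below is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the perf_adq.sh poll-group check: count poll groups whose
# current_io_qpairs is 1 and compare against the expected core count.
# Uses grep on a trimmed copy of the nvmf_get_stats JSON instead of jq.
set -euo pipefail

nvmf_stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'

expected=4
count=$(grep -c '"current_io_qpairs": 1' <<< "$nvmf_stats")

if [[ $count -ne $expected ]]; then
    echo "ADQ steering failed: $count/$expected poll groups active" >&2
    exit 1
fi
echo "all $count poll groups have an active io_qpair"
```

If connections collapse onto fewer poll groups, the count drops below 4 and the test aborts before measuring throughput, which is exactly the gate the log shows passing.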
00:23:00.162 ======================================================== 00:23:00.162 Latency(us) 00:23:00.162 Device Information : IOPS MiB/s Average min max 00:23:00.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10201.70 39.85 6273.71 2317.59 10871.18 00:23:00.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10592.70 41.38 6043.04 1879.09 10368.92 00:23:00.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10418.00 40.70 6142.83 2516.30 10302.78 00:23:00.162 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10152.60 39.66 6303.13 2459.41 10614.47 00:23:00.162 ======================================================== 00:23:00.162 Total : 41364.98 161.58 6188.90 1879.09 10871.18 00:23:00.162 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.162 rmmod nvme_tcp 00:23:00.162 rmmod nvme_fabrics 00:23:00.162 rmmod nvme_keyring 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:00.162 08:59:12 
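The teardown above runs `modprobe -v -r nvme-tcp` under `set +e` inside a bounded `for i in {1..20}` loop (nvmf/common.sh@124-126), since module refcounts can linger briefly after the perf process exits. A standalone sketch of that retry pattern, with `modprobe` stubbed to fail twice before succeeding so the loop is runnable without kernel modules:

```shell
#!/usr/bin/env bash
# Sketch of the bounded-retry unload pattern from nvmftestfini
# (nvmf/common.sh@124-128). modprobe is stubbed: it fails twice,
# then succeeds, standing in for a module whose refcount drains.
attempts=0
modprobe() {                      # stub for `modprobe -v -r nvme-tcp`
    attempts=$((attempts + 1))
    (( attempts >= 3 ))           # fail on attempts 1-2, succeed on 3
}

set +e                            # tolerate failures inside the loop
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 0                       # the real script can afford a pause here
done
set -e

echo "unloaded after $attempts attempt(s)"   # attempts == 3 here
```

Bounding the loop at 20 keeps a genuinely stuck module (e.g. a leaked controller reference) from hanging the CI job; the `return 0` at common.sh@129 means teardown proceeds either way.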
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 867384 ']' 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 867384 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 867384 ']' 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 867384 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867384 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867384' 00:23:00.162 killing process with pid 867384 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 867384 00:23:00.162 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 867384 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:23:00.162 08:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.162 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.163 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.163 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.163 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.066 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:02.066 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:02.066 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:02.066 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:02.632 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:04.532 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@436 -- # local -g is_hw=no 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.805 08:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:09.805 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:09.805 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.805 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:09.806 Found net devices under 0000:09:00.0: cvl_0_0 00:23:09.806 08:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:09.806 Found net devices under 0000:09:00.1: cvl_0_1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:09.806 00:23:09.806 --- 10.0.0.2 ping statistics --- 00:23:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.806 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:09.806 00:23:09.806 --- 10.0.0.1 ping statistics --- 00:23:09.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.806 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:09.806 net.core.busy_poll = 1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:09.806 net.core.busy_read = 1 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:09.806 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=870033 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
870033 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 870033 ']' 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:09.806 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.065 [2024-11-06 08:59:23.138083] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:10.065 [2024-11-06 08:59:23.138185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.065 [2024-11-06 08:59:23.211034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.065 [2024-11-06 08:59:23.266025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.065 [2024-11-06 08:59:23.266079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.065 [2024-11-06 08:59:23.266103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.065 [2024-11-06 08:59:23.266114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:10.065 [2024-11-06 08:59:23.266124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.065 [2024-11-06 08:59:23.267517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.065 [2024-11-06 08:59:23.267576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.065 [2024-11-06 08:59:23.267641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.065 [2024-11-06 08:59:23.267644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 [2024-11-06 08:59:23.554585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 Malloc1 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.324 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.582 [2024-11-06 08:59:23.615591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:10.582 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.582 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=870081 
00:23:10.582 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:10.582 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:12.482 "tick_rate": 2700000000, 00:23:12.482 "poll_groups": [ 00:23:12.482 { 00:23:12.482 "name": "nvmf_tgt_poll_group_000", 00:23:12.482 "admin_qpairs": 1, 00:23:12.482 "io_qpairs": 3, 00:23:12.482 "current_admin_qpairs": 1, 00:23:12.482 "current_io_qpairs": 3, 00:23:12.482 "pending_bdev_io": 0, 00:23:12.482 "completed_nvme_io": 26412, 00:23:12.482 "transports": [ 00:23:12.482 { 00:23:12.482 "trtype": "TCP" 00:23:12.482 } 00:23:12.482 ] 00:23:12.482 }, 00:23:12.482 { 00:23:12.482 "name": "nvmf_tgt_poll_group_001", 00:23:12.482 "admin_qpairs": 0, 00:23:12.482 "io_qpairs": 1, 00:23:12.482 "current_admin_qpairs": 0, 00:23:12.482 "current_io_qpairs": 1, 00:23:12.482 "pending_bdev_io": 0, 00:23:12.482 "completed_nvme_io": 24797, 00:23:12.482 "transports": [ 00:23:12.482 { 00:23:12.482 "trtype": "TCP" 00:23:12.482 } 00:23:12.482 ] 00:23:12.482 }, 00:23:12.482 { 00:23:12.482 "name": "nvmf_tgt_poll_group_002", 00:23:12.482 "admin_qpairs": 0, 00:23:12.482 "io_qpairs": 0, 00:23:12.482 "current_admin_qpairs": 0, 
00:23:12.482 "current_io_qpairs": 0, 00:23:12.482 "pending_bdev_io": 0, 00:23:12.482 "completed_nvme_io": 0, 00:23:12.482 "transports": [ 00:23:12.482 { 00:23:12.482 "trtype": "TCP" 00:23:12.482 } 00:23:12.482 ] 00:23:12.482 }, 00:23:12.482 { 00:23:12.482 "name": "nvmf_tgt_poll_group_003", 00:23:12.482 "admin_qpairs": 0, 00:23:12.482 "io_qpairs": 0, 00:23:12.482 "current_admin_qpairs": 0, 00:23:12.482 "current_io_qpairs": 0, 00:23:12.482 "pending_bdev_io": 0, 00:23:12.482 "completed_nvme_io": 0, 00:23:12.482 "transports": [ 00:23:12.482 { 00:23:12.482 "trtype": "TCP" 00:23:12.482 } 00:23:12.482 ] 00:23:12.482 } 00:23:12.482 ] 00:23:12.482 }' 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:12.482 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 870081 00:23:20.644 Initializing NVMe Controllers 00:23:20.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:20.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:20.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:20.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:20.644 Initialization complete. Launching workers. 
00:23:20.644 ======================================================== 00:23:20.644 Latency(us) 00:23:20.644 Device Information : IOPS MiB/s Average min max 00:23:20.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4382.17 17.12 14673.71 2129.93 61706.36 00:23:20.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4509.57 17.62 14226.63 1893.53 61530.16 00:23:20.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13299.41 51.95 4812.28 2197.32 47316.45 00:23:20.644 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4943.47 19.31 12954.69 1475.44 62721.07 00:23:20.644 ======================================================== 00:23:20.644 Total : 27134.62 105.99 9452.88 1475.44 62721.07 00:23:20.644 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.644 rmmod nvme_tcp 00:23:20.644 rmmod nvme_fabrics 00:23:20.644 rmmod nvme_keyring 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:20.644 08:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 870033 ']' 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 870033 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 870033 ']' 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 870033 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 870033 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 870033' 00:23:20.644 killing process with pid 870033 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 870033 00:23:20.644 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 870033 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:23:20.903 08:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.903 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:23.441 00:23:23.441 real 0m43.981s 00:23:23.441 user 2m40.384s 00:23:23.441 sys 0m9.389s 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.441 ************************************ 00:23:23.441 END TEST nvmf_perf_adq 00:23:23.441 ************************************ 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:23.441 ************************************ 00:23:23.441 START TEST nvmf_shutdown 00:23:23.441 ************************************ 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:23.441 * Looking for test storage... 00:23:23.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lcov --version 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.441 08:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:23:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.441 --rc genhtml_branch_coverage=1 00:23:23.441 --rc genhtml_function_coverage=1 00:23:23.441 --rc genhtml_legend=1 00:23:23.441 --rc geninfo_all_blocks=1 00:23:23.441 --rc geninfo_unexecuted_blocks=1 00:23:23.441 00:23:23.441 ' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:23:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.441 --rc genhtml_branch_coverage=1 00:23:23.441 --rc genhtml_function_coverage=1 00:23:23.441 --rc genhtml_legend=1 00:23:23.441 --rc geninfo_all_blocks=1 00:23:23.441 --rc geninfo_unexecuted_blocks=1 00:23:23.441 00:23:23.441 ' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:23:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.441 --rc genhtml_branch_coverage=1 00:23:23.441 --rc genhtml_function_coverage=1 00:23:23.441 --rc genhtml_legend=1 00:23:23.441 --rc geninfo_all_blocks=1 00:23:23.441 --rc geninfo_unexecuted_blocks=1 00:23:23.441 00:23:23.441 ' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:23:23.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.441 --rc genhtml_branch_coverage=1 00:23:23.441 --rc genhtml_function_coverage=1 00:23:23.441 --rc genhtml_legend=1 00:23:23.441 --rc geninfo_all_blocks=1 00:23:23.441 --rc geninfo_unexecuted_blocks=1 00:23:23.441 00:23:23.441 ' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.441 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:23.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:23.442 ************************************ 00:23:23.442 START TEST nvmf_shutdown_tc1 00:23:23.442 ************************************ 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:23.442 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:25.344 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.344 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:25.344 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.344 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:25.344 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:25.344 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:25.345 Found net devices under 0000:09:00.0: cvl_0_0 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:25.345 Found net devices under 0000:09:00.1: cvl_0_1 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.345 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.345 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.603 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.603 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.603 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.603 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.603 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:23:25.604 00:23:25.604 --- 10.0.0.2 ping statistics --- 00:23:25.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.604 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:23:25.604 00:23:25.604 --- 10.0.0.1 ping statistics --- 00:23:25.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.604 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=873355 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 873355 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 873355 ']' 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:25.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.604 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.604 [2024-11-06 08:59:38.819164] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:25.604 [2024-11-06 08:59:38.819234] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.604 [2024-11-06 08:59:38.890286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.863 [2024-11-06 08:59:38.948942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.863 [2024-11-06 08:59:38.949002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.863 [2024-11-06 08:59:38.949016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.863 [2024-11-06 08:59:38.949028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.863 [2024-11-06 08:59:38.949037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:25.863 [2024-11-06 08:59:38.950589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.863 [2024-11-06 08:59:38.950643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.863 [2024-11-06 08:59:38.950694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.863 [2024-11-06 08:59:38.950697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.863 [2024-11-06 08:59:39.102050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.863 08:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.863 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.122 Malloc1 00:23:26.122 [2024-11-06 08:59:39.198200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.122 Malloc2 00:23:26.122 Malloc3 00:23:26.122 Malloc4 00:23:26.122 Malloc5 00:23:26.380 Malloc6 00:23:26.380 Malloc7 00:23:26.380 Malloc8 00:23:26.380 Malloc9 
00:23:26.380 Malloc10 00:23:26.380 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.380 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:26.380 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:26.380 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=873525 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 873525 /var/tmp/bdevperf.sock 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 873525 ']' 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.638 { 00:23:26.638 "params": { 00:23:26.638 "name": "Nvme$subsystem", 00:23:26.638 "trtype": "$TEST_TRANSPORT", 00:23:26.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.638 "adrfam": "ipv4", 00:23:26.638 "trsvcid": "$NVMF_PORT", 00:23:26.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.638 "hdgst": ${hdgst:-false}, 00:23:26.638 "ddgst": ${ddgst:-false} 00:23:26.638 }, 00:23:26.638 "method": "bdev_nvme_attach_controller" 00:23:26.638 } 00:23:26.638 EOF 00:23:26.638 )") 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.638 { 00:23:26.638 "params": { 00:23:26.638 "name": "Nvme$subsystem", 00:23:26.638 "trtype": "$TEST_TRANSPORT", 00:23:26.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.638 "adrfam": "ipv4", 00:23:26.638 "trsvcid": "$NVMF_PORT", 00:23:26.638 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.638 "hdgst": ${hdgst:-false}, 00:23:26.638 "ddgst": ${ddgst:-false} 00:23:26.638 }, 00:23:26.638 "method": "bdev_nvme_attach_controller" 00:23:26.638 } 00:23:26.638 EOF 00:23:26.638 )") 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.638 { 00:23:26.638 "params": { 00:23:26.638 "name": "Nvme$subsystem", 00:23:26.638 "trtype": "$TEST_TRANSPORT", 00:23:26.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.638 "adrfam": "ipv4", 00:23:26.638 "trsvcid": "$NVMF_PORT", 00:23:26.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.638 "hdgst": ${hdgst:-false}, 00:23:26.638 "ddgst": ${ddgst:-false} 00:23:26.638 }, 00:23:26.638 "method": "bdev_nvme_attach_controller" 00:23:26.638 } 00:23:26.638 EOF 00:23:26.638 )") 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.638 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.638 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": 
${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 
00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:26.639 { 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme$subsystem", 00:23:26.639 "trtype": "$TEST_TRANSPORT", 00:23:26.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "$NVMF_PORT", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.639 "hdgst": ${hdgst:-false}, 00:23:26.639 "ddgst": ${ddgst:-false} 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 } 00:23:26.639 EOF 00:23:26.639 )") 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:23:26.639 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme1", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme2", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme3", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme4", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 
00:23:26.639 "params": { 00:23:26.639 "name": "Nvme5", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme6", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme7", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.639 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.639 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.639 "hdgst": false, 00:23:26.639 "ddgst": false 00:23:26.639 }, 00:23:26.639 "method": "bdev_nvme_attach_controller" 00:23:26.639 },{ 00:23:26.639 "params": { 00:23:26.639 "name": "Nvme8", 00:23:26.639 "trtype": "tcp", 00:23:26.639 "traddr": "10.0.0.2", 00:23:26.639 "adrfam": "ipv4", 00:23:26.639 "trsvcid": "4420", 00:23:26.640 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.640 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.640 "hdgst": false, 00:23:26.640 "ddgst": false 00:23:26.640 }, 00:23:26.640 "method": "bdev_nvme_attach_controller" 00:23:26.640 },{ 00:23:26.640 "params": { 00:23:26.640 "name": "Nvme9", 00:23:26.640 "trtype": "tcp", 00:23:26.640 "traddr": "10.0.0.2", 00:23:26.640 "adrfam": "ipv4", 00:23:26.640 "trsvcid": "4420", 00:23:26.640 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.640 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:26.640 "hdgst": false, 00:23:26.640 "ddgst": false 00:23:26.640 }, 00:23:26.640 "method": "bdev_nvme_attach_controller" 00:23:26.640 },{ 00:23:26.640 "params": { 00:23:26.640 "name": "Nvme10", 00:23:26.640 "trtype": "tcp", 00:23:26.640 "traddr": "10.0.0.2", 00:23:26.640 "adrfam": "ipv4", 00:23:26.640 "trsvcid": "4420", 00:23:26.640 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.640 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.640 "hdgst": false, 00:23:26.640 "ddgst": false 00:23:26.640 }, 00:23:26.640 "method": "bdev_nvme_attach_controller" 00:23:26.640 }' 00:23:26.640 [2024-11-06 08:59:39.721765] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:26.640 [2024-11-06 08:59:39.721877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:26.640 [2024-11-06 08:59:39.796915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.640 [2024-11-06 08:59:39.856612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 873525 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:28.538 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:29.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 873525 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:29.911 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 873355 00:23:29.911 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": 
${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 
00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:29.912 { 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme$subsystem", 00:23:29.912 "trtype": "$TEST_TRANSPORT", 00:23:29.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "$NVMF_PORT", 00:23:29.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.912 "hdgst": ${hdgst:-false}, 00:23:29.912 "ddgst": ${ddgst:-false} 00:23:29.912 }, 00:23:29.912 "method": "bdev_nvme_attach_controller" 00:23:29.912 } 00:23:29.912 EOF 00:23:29.912 )") 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:23:29.912 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:29.912 "params": { 00:23:29.912 "name": "Nvme1", 00:23:29.912 "trtype": "tcp", 00:23:29.912 "traddr": "10.0.0.2", 00:23:29.912 "adrfam": "ipv4", 00:23:29.912 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme2", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 
00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme3", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme4", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme5", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme6", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme7", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme8", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme9", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 },{ 00:23:29.913 "params": { 00:23:29.913 "name": "Nvme10", 00:23:29.913 "trtype": "tcp", 00:23:29.913 "traddr": "10.0.0.2", 00:23:29.913 "adrfam": "ipv4", 00:23:29.913 "trsvcid": "4420", 00:23:29.913 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.913 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.913 "hdgst": false, 00:23:29.913 "ddgst": false 00:23:29.913 }, 00:23:29.913 "method": "bdev_nvme_attach_controller" 00:23:29.913 }' 00:23:29.913 [2024-11-06 08:59:42.822998] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:23:29.913 [2024-11-06 08:59:42.823084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid873838 ] 00:23:29.913 [2024-11-06 08:59:42.896287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.913 [2024-11-06 08:59:42.958323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.811 Running I/O for 1 seconds... 00:23:32.744 1750.00 IOPS, 109.38 MiB/s 00:23:32.744 Latency(us) 00:23:32.744 [2024-11-06T07:59:46.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme1n1 : 1.11 236.08 14.76 0.00 0.00 267018.05 13981.01 231463.44 00:23:32.744 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme2n1 : 1.12 232.80 14.55 0.00 0.00 265923.06 2900.57 256318.58 00:23:32.744 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme3n1 : 1.10 233.23 14.58 0.00 0.00 262424.27 19029.71 265639.25 00:23:32.744 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme4n1 : 1.12 231.28 14.46 0.00 0.00 259920.02 2269.49 260978.92 00:23:32.744 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme5n1 : 1.16 224.03 14.00 0.00 0.00 264679.88 2560.76 281173.71 00:23:32.744 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 
0x400 00:23:32.744 Nvme6n1 : 1.15 223.16 13.95 0.00 0.00 261105.78 19320.98 254765.13 00:23:32.744 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme7n1 : 1.11 230.12 14.38 0.00 0.00 247802.69 20777.34 271853.04 00:23:32.744 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme8n1 : 1.20 267.73 16.73 0.00 0.00 210644.54 14272.28 240784.12 00:23:32.744 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.744 Verification LBA range: start 0x0 length 0x400 00:23:32.744 Nvme9n1 : 1.19 215.51 13.47 0.00 0.00 257538.28 39418.69 264085.81 00:23:32.745 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.745 Verification LBA range: start 0x0 length 0x400 00:23:32.745 Nvme10n1 : 1.20 266.02 16.63 0.00 0.00 205673.97 6456.51 284280.60 00:23:32.745 [2024-11-06T07:59:46.034Z] =================================================================================================================== 00:23:32.745 [2024-11-06T07:59:46.034Z] Total : 2359.97 147.50 0.00 0.00 248369.90 2269.49 284280.60 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.002 rmmod nvme_tcp 00:23:33.002 rmmod nvme_fabrics 00:23:33.002 rmmod nvme_keyring 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 873355 ']' 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 873355 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 873355 ']' 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 873355 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873355 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873355' 00:23:33.002 killing process with pid 873355 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 873355 00:23:33.002 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 873355 00:23:33.568 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.569 08:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.569 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.103 00:23:36.103 real 0m12.354s 00:23:36.103 user 0m36.472s 00:23:36.103 sys 0m3.301s 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.103 ************************************ 00:23:36.103 END TEST nvmf_shutdown_tc1 00:23:36.103 ************************************ 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:36.103 ************************************ 00:23:36.103 START TEST nvmf_shutdown_tc2 00:23:36.103 ************************************ 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:36.103 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.103 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.103 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:36.104 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:36.104 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:36.104 Found net devices under 0000:09:00.0: cvl_0_0 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.104 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:36.104 Found net devices under 0000:09:00.1: cvl_0_1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.104 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:23:36.104 00:23:36.104 --- 10.0.0.2 ping statistics --- 00:23:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.104 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:23:36.104 00:23:36.104 --- 10.0.0.1 ping statistics --- 00:23:36.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.104 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.104 
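The `nvmf_tcp_init` trace above (nvmf/common.sh@250-291) boils down to the command sequence below. This is a sketch reconstructed from the log, not the helper itself: the function name, the `run` wrapper, and the `DRY_RUN` switch are illustrative additions; the interface names `cvl_0_0`/`cvl_0_1` and the `10.0.0.x` addresses are taken from the log output.

```shell
#!/usr/bin/env bash
set -euo pipefail

# DRY_RUN=1 (the default here) prints each privileged command instead of
# executing it; set DRY_RUN=0 and run as root to apply it for real.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
  local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
  run ip -4 addr flush "$target_if"
  run ip -4 addr flush "$initiator_if"
  # The target interface moves into its own namespace; the initiator stays
  # in the root namespace, so NVMe/TCP traffic crosses a real link.
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  # Open the NVMe/TCP listener port and confirm reachability both ways.
  run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  run ping -c 1 10.0.0.2
  run ip netns exec "$ns" ping -c 1 10.0.0.1
}

nvmf_tcp_init_sketch
```

The namespace split is what lets a single CI node act as both target and initiator: the `nvmf_tgt` app is later launched under `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible in the trace), while bdevperf connects from the root namespace.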
08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=874724 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 874724 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 874724 ']' 00:23:36.104 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 [2024-11-06 08:59:49.106610] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:23:36.105 [2024-11-06 08:59:49.106681] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.105 [2024-11-06 08:59:49.177144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.105 [2024-11-06 08:59:49.233174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.105 [2024-11-06 08:59:49.233223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.105 [2024-11-06 08:59:49.233246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.105 [2024-11-06 08:59:49.233256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.105 [2024-11-06 08:59:49.233266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.105 [2024-11-06 08:59:49.234729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.105 [2024-11-06 08:59:49.234790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.105 [2024-11-06 08:59:49.234857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:36.105 [2024-11-06 08:59:49.234861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.105 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.105 [2024-11-06 08:59:49.385501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.363 08:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.363 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.364 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.364 Malloc1 00:23:36.364 [2024-11-06 08:59:49.491994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.364 Malloc2 00:23:36.364 Malloc3 00:23:36.364 Malloc4 00:23:36.621 Malloc5 00:23:36.621 Malloc6 00:23:36.621 Malloc7 00:23:36.621 Malloc8 00:23:36.621 Malloc9 
00:23:36.880 Malloc10 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=874904 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 874904 /var/tmp/bdevperf.sock 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 874904 ']' 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:36.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": 
${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 
00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.880 { 00:23:36.880 "params": { 00:23:36.880 "name": "Nvme$subsystem", 00:23:36.880 "trtype": "$TEST_TRANSPORT", 00:23:36.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.880 "adrfam": "ipv4", 00:23:36.880 "trsvcid": "$NVMF_PORT", 00:23:36.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.880 "hdgst": ${hdgst:-false}, 00:23:36.880 "ddgst": ${ddgst:-false} 00:23:36.880 }, 00:23:36.880 "method": "bdev_nvme_attach_controller" 00:23:36.880 } 00:23:36.880 EOF 00:23:36.880 )") 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.880 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.881 { 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme$subsystem", 00:23:36.881 "trtype": "$TEST_TRANSPORT", 00:23:36.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "$NVMF_PORT", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.881 "hdgst": ${hdgst:-false}, 00:23:36.881 "ddgst": ${ddgst:-false} 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 } 00:23:36.881 EOF 00:23:36.881 )") 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@580 -- # cat 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.881 { 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme$subsystem", 00:23:36.881 "trtype": "$TEST_TRANSPORT", 00:23:36.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "$NVMF_PORT", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.881 "hdgst": ${hdgst:-false}, 00:23:36.881 "ddgst": ${ddgst:-false} 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 } 00:23:36.881 EOF 00:23:36.881 )") 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:36.881 { 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme$subsystem", 00:23:36.881 "trtype": "$TEST_TRANSPORT", 00:23:36.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "$NVMF_PORT", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.881 "hdgst": ${hdgst:-false}, 00:23:36.881 "ddgst": ${ddgst:-false} 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 } 00:23:36.881 EOF 00:23:36.881 )") 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # jq . 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:23:36.881 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme1", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme2", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme3", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme4", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 
00:23:36.881 "params": { 00:23:36.881 "name": "Nvme5", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme6", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme7", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme8", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme9", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.881 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 },{ 00:23:36.881 "params": { 00:23:36.881 "name": "Nvme10", 00:23:36.881 "trtype": "tcp", 00:23:36.881 "traddr": "10.0.0.2", 00:23:36.881 "adrfam": "ipv4", 00:23:36.881 "trsvcid": "4420", 00:23:36.881 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.881 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.881 "hdgst": false, 00:23:36.881 "ddgst": false 00:23:36.881 }, 00:23:36.881 "method": "bdev_nvme_attach_controller" 00:23:36.881 }' 00:23:36.881 [2024-11-06 08:59:50.024102] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:36.881 [2024-11-06 08:59:50.024222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid874904 ] 00:23:36.881 [2024-11-06 08:59:50.097337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.881 [2024-11-06 08:59:50.157888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.780 Running I/O for 10 seconds... 
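The long `gen_nvmf_target_json` expansion above builds one `bdev_nvme_attach_controller` entry per subsystem ID and joins them with commas (via `IFS=,`) before feeding the result to bdevperf through `--json /dev/fd/63`. A compact sketch of that bash array-plus-IFS-join pattern, with the values the log shows after expansion (the real helper also pipes the result through `jq .` and the function name here is illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Emit a comma-joined list of attach-controller param objects, one per
# subsystem ID passed as an argument, mirroring the expanded log output.
gen_nvmf_target_json_sketch() {
  local subsystem entries=()
  for subsystem in "$@"; do
    entries+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
      "$subsystem" "$subsystem" "$subsystem")")
  done
  local IFS=,   # "${entries[*]}" joins array elements with the first IFS char
  printf '%s\n' "${entries[*]}"
}

gen_nvmf_target_json_sketch 1 2 3
```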
00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:39.039 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=135 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 874904 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 874904 ']' 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 874904 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 874904 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 874904' 00:23:39.297 killing process with pid 874904 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 874904 00:23:39.297 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 874904 00:23:39.297 Received 
shutdown signal, test time was about 0.853708 seconds 00:23:39.297 00:23:39.297 Latency(us) 00:23:39.297 [2024-11-06T07:59:52.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.297 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme1n1 : 0.83 259.79 16.24 0.00 0.00 238093.23 11505.21 254765.13 00:23:39.297 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme2n1 : 0.84 228.17 14.26 0.00 0.00 270784.28 20777.34 253211.69 00:23:39.297 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme3n1 : 0.81 237.86 14.87 0.00 0.00 253300.69 25437.68 250104.79 00:23:39.297 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme4n1 : 0.81 236.10 14.76 0.00 0.00 248460.33 19223.89 246997.90 00:23:39.297 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme5n1 : 0.84 229.78 14.36 0.00 0.00 250639.04 19806.44 256318.58 00:23:39.297 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme6n1 : 0.82 233.68 14.61 0.00 0.00 239111.84 23981.32 228356.55 00:23:39.297 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.297 Verification LBA range: start 0x0 length 0x400 00:23:39.297 Nvme7n1 : 0.83 232.27 14.52 0.00 0.00 235300.03 18835.53 251658.24 00:23:39.298 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.298 Verification LBA range: start 0x0 length 0x400 00:23:39.298 Nvme8n1 : 0.85 226.97 14.19 0.00 0.00 235722.27 
22622.06 254765.13 00:23:39.298 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.298 Verification LBA range: start 0x0 length 0x400 00:23:39.298 Nvme9n1 : 0.85 226.04 14.13 0.00 0.00 230970.15 20874.43 259425.47 00:23:39.298 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:39.298 Verification LBA range: start 0x0 length 0x400 00:23:39.298 Nvme10n1 : 0.85 225.12 14.07 0.00 0.00 226438.26 22039.51 282727.16 00:23:39.298 [2024-11-06T07:59:52.587Z] =================================================================================================================== 00:23:39.298 [2024-11-06T07:59:52.587Z] Total : 2335.77 145.99 0.00 0.00 242822.89 11505.21 282727.16 00:23:39.556 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 
00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:40.927 rmmod nvme_tcp 00:23:40.927 rmmod nvme_fabrics 00:23:40.927 rmmod nvme_keyring 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 874724 ']' 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 874724 ']' 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 874724' 00:23:40.927 killing process with pid 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 874724 00:23:40.927 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 874724 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.185 08:59:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.185 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.093 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.093 00:23:43.093 real 0m7.503s 00:23:43.093 user 0m22.686s 00:23:43.093 sys 0m1.475s 00:23:43.093 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.093 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:43.093 ************************************ 00:23:43.093 END TEST nvmf_shutdown_tc2 00:23:43.093 ************************************ 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.354 ************************************ 00:23:43.354 START TEST nvmf_shutdown_tc3 00:23:43.354 ************************************ 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.354 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.355 
08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.355 08:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:43.355 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:43.355 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.355 08:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:43.355 Found net devices under 0000:09:00.0: cvl_0_0 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.355 08:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:43.355 Found net devices under 0000:09:00.1: cvl_0_1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.355 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:43.355 00:23:43.356 --- 10.0.0.2 ping statistics --- 00:23:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.356 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:23:43.356 00:23:43.356 --- 10.0.0.1 ping statistics --- 00:23:43.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.356 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.356 
08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=875694 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 875694 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 875694 ']' 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.356 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.615 [2024-11-06 08:59:56.670282] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:23:43.615 [2024-11-06 08:59:56.670377] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.615 [2024-11-06 08:59:56.744216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.615 [2024-11-06 08:59:56.800724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.615 [2024-11-06 08:59:56.800779] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.615 [2024-11-06 08:59:56.800801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.615 [2024-11-06 08:59:56.800812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.615 [2024-11-06 08:59:56.800821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:43.615 [2024-11-06 08:59:56.802285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.615 [2024-11-06 08:59:56.802350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.615 [2024-11-06 08:59:56.802419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:43.615 [2024-11-06 08:59:56.802422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.873 [2024-11-06 08:59:56.953736] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.873 08:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.873 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.874 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:43.874 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:43.874 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:43.874 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.874 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.874 Malloc1 00:23:43.874 [2024-11-06 08:59:57.055606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.874 Malloc2 00:23:43.874 Malloc3 00:23:44.131 Malloc4 00:23:44.132 Malloc5 00:23:44.132 Malloc6 00:23:44.132 Malloc7 00:23:44.132 Malloc8 00:23:44.390 Malloc9 
00:23:44.390 Malloc10 00:23:44.390 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.390 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:44.390 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=875874 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 875874 /var/tmp/bdevperf.sock 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 875874 ']' 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:23:44.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 
"adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": 
${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 
)") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.391 "params": { 00:23:44.391 "name": "Nvme$subsystem", 00:23:44.391 "trtype": "$TEST_TRANSPORT", 00:23:44.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.391 "adrfam": "ipv4", 00:23:44.391 "trsvcid": "$NVMF_PORT", 00:23:44.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.391 "hdgst": ${hdgst:-false}, 00:23:44.391 "ddgst": ${ddgst:-false} 00:23:44.391 }, 00:23:44.391 "method": "bdev_nvme_attach_controller" 00:23:44.391 } 00:23:44.391 EOF 00:23:44.391 )") 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:23:44.391 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:23:44.391 { 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme$subsystem", 00:23:44.392 "trtype": "$TEST_TRANSPORT", 00:23:44.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "$NVMF_PORT", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.392 "hdgst": ${hdgst:-false}, 00:23:44.392 "ddgst": ${ddgst:-false} 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 } 00:23:44.392 EOF 00:23:44.392 )") 00:23:44.392 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:23:44.392 
08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:23:44.392 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:23:44.392 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme1", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme2", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme3", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme4", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 
00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme5", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme6", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme7", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme8", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme9", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 },{ 00:23:44.392 "params": { 00:23:44.392 "name": "Nvme10", 00:23:44.392 "trtype": "tcp", 00:23:44.392 "traddr": "10.0.0.2", 00:23:44.392 "adrfam": "ipv4", 00:23:44.392 "trsvcid": "4420", 00:23:44.392 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.392 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.392 "hdgst": false, 00:23:44.392 "ddgst": false 00:23:44.392 }, 00:23:44.392 "method": "bdev_nvme_attach_controller" 00:23:44.392 }' 00:23:44.392 [2024-11-06 08:59:57.558772] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:44.392 [2024-11-06 08:59:57.558870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid875874 ] 00:23:44.392 [2024-11-06 08:59:57.631403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.650 [2024-11-06 08:59:57.691359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.549 Running I/O for 10 seconds... 
00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.549 08:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:46.549 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=72 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:23:46.837 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=138 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 138 -ge 100 ']' 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:47.135 09:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 875694 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 875694 ']' 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 875694 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 875694 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 875694' 00:23:47.135 killing process with pid 875694 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 875694 00:23:47.135 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 875694 00:23:47.135 [2024-11-06 09:00:00.240035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8640 is same with the state(6) to be set 00:23:47.135 [2024-11-06 09:00:00.240201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8640 is same with the state(6) to be set 00:23:47.135 [2024-11-06 09:00:00.240219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8d8640 is same with the state(6) to be set 00:23:47.135 [… same message repeated ~48 more times, 09:00:00.240232–09:00:00.240827 …] [2024-11-06 09:00:00.240850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x8d8640 is same with the state(6) to be set 00:23:47.136 [… same message repeated 9 more times, 09:00:00.240863–09:00:00.240973 …] [2024-11-06 09:00:00.242349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0f20 is same with the state(6) to be set 00:23:47.136 [2024-11-06 09:00:00.242385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0f20 is same with the state(6) to be set 00:23:47.136 [2024-11-06 09:00:00.242401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0f20
is same with the state(6) to be set 00:23:47.136 [… same message repeated ~47 more times, 09:00:00.242413–09:00:00.243015 …] [2024-11-06 09:00:00.243027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b0f20
is same with the state(6) to be set 00:23:47.136 [… same message repeated 11 more times, 09:00:00.243041–09:00:00.243191 …] [2024-11-06 09:00:00.243497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000
cdw11:00000000 00:23:47.137 [2024-11-06 09:00:00.243542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.137 [2024-11-06 09:00:00.243560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.137 [2024-11-06 09:00:00.243575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.137 [2024-11-06 09:00:00.243590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.137 [2024-11-06 09:00:00.243604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.137 [2024-11-06 09:00:00.243619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.137 [2024-11-06 09:00:00.243633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.137 [2024-11-06 09:00:00.243646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b56f0 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.247293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8b30 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.247328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8b30 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.247343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d8b30 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.247357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8d8b30 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.248819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9000 is same with the state(6) to be set 00:23:47.137 [… same message repeated 10 more times, 09:00:00.248864–09:00:00.249002 …] [2024-11-06 09:00:00.249014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9000 is same with the state(6)
to be set 00:23:47.137 [… same message repeated ~46 more times, 09:00:00.249026–09:00:00.249633 …] [2024-11-06 09:00:00.249645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9000
is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.249657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9000 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.249669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9000 is same with the state(6) to be set 00:23:47.137 [2024-11-06 09:00:00.250614] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.137 [2024-11-06 09:00:00.250674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.137 [2024-11-06 09:00:00.250697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.137 [2024-11-06 09:00:00.250725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.137 [2024-11-06 09:00:00.250741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.250973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.250988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251236] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is
same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.138 [2024-11-06 09:00:00.251508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.138 [2024-11-06 09:00:00.251520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set 00:23:47.138 [2024-11-06 09:00:00.251529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.138 [2024-11-06 09:00:00.251611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.138 [2024-11-06 09:00:00.251618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.138 [2024-11-06 09:00:00.251624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.251973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.251987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.251989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.252018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.252030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.252042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.252066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.252078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.139 [2024-11-06 09:00:00.252095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.139 [2024-11-06 09:00:00.252108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.139 [2024-11-06 09:00:00.252120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.140 [2024-11-06 09:00:00.252121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.140 [2024-11-06 09:00:00.252146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.140 [2024-11-06 09:00:00.252148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.140 [2024-11-06 09:00:00.252158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9380 is same with the state(6) to be set
00:23:47.140 [2024-11-06 09:00:00.252165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.140 [2024-11-06 09:00:00.252180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.140 [2024-11-06 09:00:00.252204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252379] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.140 [2024-11-06 09:00:00.252863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.140 [2024-11-06 09:00:00.252999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9850 is same with the state(6) to be set 00:23:47.140 [2024-11-06 09:00:00.253028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9850 is same with the state(6) to be set 00:23:47.140 [2024-11-06 09:00:00.253043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9850 is same with the state(6) to be set 00:23:47.140 [2024-11-06 09:00:00.253056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x8d9850 is same with the state(6) to be set
00:23:47.140 [2024-11-06 09:00:00.253069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9850 is same with the state(6) to be set
00:23:47.141 [2024-11-06 09:00:00.254217] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.141 [2024-11-06 09:00:00.254322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000
cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc28420 is same with the state(6) to be set
00:23:47.141 [2024-11-06 09:00:00.254503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b56f0 (9): Bad file descriptor
00:23:47.141 [2024-11-06 09:00:00.254558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5270 is same with the state(6) to be set
00:23:47.141 [2024-11-06 09:00:00.254730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2ab0 is same with the state(6) to be set
00:23:47.141 [2024-11-06 09:00:00.254932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.254984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.254995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d9d20 is same with the state(6) to be set
00:23:47.141 [2024-11-06 09:00:00.254999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.141 [2024-11-06 09:00:00.255014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.141 [2024-11-06 09:00:00.255029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.142 [2024-11-06 09:00:00.255044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5ff0 is same with the state(6) to be set
00:23:47.142 [2024-11-06 09:00:00.255104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.142 [2024-11-06 09:00:00.255125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.142 [2024-11-06 09:00:00.255158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.142 [2024-11-06 09:00:00.255188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.142 [2024-11-06 09:00:00.255218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a96c0 is same with the state(6) to be set
00:23:47.142 [2024-11-06 09:00:00.255310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.142 [2024-11-06 09:00:00.255724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.142 [2024-11-06 09:00:00.255741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.255973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.255988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.256005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.256020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.256036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.256051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.256068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.143 [2024-11-06 09:00:00.256082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.143 [2024-11-06 09:00:00.256099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 
09:00:00.256498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256700] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.143 [2024-11-06 09:00:00.256893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.143 [2024-11-06 09:00:00.256910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.256924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.256941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.256955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.256971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.256986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 
[2024-11-06 09:00:00.257090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668c90 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.257285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668c90 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.257302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668c90 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.257318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x668c90 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.257336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.144 [2024-11-06 09:00:00.257510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.144 [2024-11-06 09:00:00.257525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5b0a0 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 
is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 
00:23:47.144 [2024-11-06 09:00:00.258603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.144 [2024-11-06 09:00:00.258627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258746] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.258992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669160 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.145 [2024-11-06 09:00:00.259590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145
[2024-11-06 09:00:00.259924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.259976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 [2024-11-06 09:00:00.259989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.259998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.145 [2024-11-06 09:00:00.260002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.145 [2024-11-06 09:00:00.260013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-06 09:00:00.260015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.145 he state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-06 09:00:00.260079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 he state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260093] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:1[2024-11-06 09:00:00.260175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 he state(6) to be set 
00:23:47.146 [2024-11-06 09:00:00.260190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:47.146 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1[2024-11-06 09:00:00.260242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 he state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:47.146 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with 
the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260307] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1[2024-11-06 09:00:00.260364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 he state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
che state(6) to be set 00:23:47.146 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1[2024-11-06 09:00:00.260431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 he state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:47.146 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260474] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 
[2024-11-06 09:00:00.260570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:1he state(6) to be set 00:23:47.146 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:47.146 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.146 [2024-11-06 09:00:00.260635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.146 [2024-11-06 09:00:00.260639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.146 [2024-11-06 09:00:00.260648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1[2024-11-06 09:00:00.260673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 he state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with t[2024-11-06 09:00:00.260687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:23:47.147 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x669630 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.260765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.147 [2024-11-06 09:00:00.260935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.260983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.260997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.261013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.261027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.261049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.261064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.261081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.261095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.261110] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.261138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.273737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.273857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.273891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.273911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.273927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.273945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.273960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.273977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.273992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.147 [2024-11-06 09:00:00.274274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:47.147 [2024-11-06 09:00:00.274717] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:47.147 [2024-11-06 09:00:00.274790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a96c0 (9): Bad file descriptor 00:23:47.147 [2024-11-06 09:00:00.274896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.147 [2024-11-06 09:00:00.274920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.147 [2024-11-06 09:00:00.274952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.147 [2024-11-06 09:00:00.274981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.274996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.147 [2024-11-06 09:00:00.275010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.147 [2024-11-06 09:00:00.275023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2e2d0 is same with the state(6) to be set 00:23:47.147 [2024-11-06 09:00:00.275057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28420 (9): Bad file descriptor 00:23:47.147 [2024-11-06 09:00:00.275104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.147 [2024-11-06 09:00:00.275135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03910 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.275277] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:23:47.148 [2024-11-06 09:00:00.275303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b5270 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.275344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b2ab0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.275377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5ff0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.275430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda0e0 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.275595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 
09:00:00.275690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.148 [2024-11-06 09:00:00.275704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.275717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d110 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.278631] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:47.148 [2024-11-06 09:00:00.278674] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:47.148 [2024-11-06 09:00:00.278698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda0e0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.279675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.148 [2024-11-06 09:00:00.279706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a96c0 with addr=10.0.0.2, port=4420 00:23:47.148 [2024-11-06 09:00:00.279725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a96c0 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.279817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.148 [2024-11-06 09:00:00.279852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b56f0 with addr=10.0.0.2, port=4420 00:23:47.148 [2024-11-06 09:00:00.279881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b56f0 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.280289] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.148 [2024-11-06 09:00:00.280366] 
nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.148 [2024-11-06 09:00:00.281631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.148 [2024-11-06 09:00:00.281662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbda0e0 with addr=10.0.0.2, port=4420 00:23:47.148 [2024-11-06 09:00:00.281680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda0e0 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.281700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a96c0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.281721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b56f0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.281900] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.148 [2024-11-06 09:00:00.281973] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.148 [2024-11-06 09:00:00.282215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 09:00:00.282242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 09:00:00.282284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 
09:00:00.282319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 09:00:00.282351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 09:00:00.282383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.148 [2024-11-06 09:00:00.282415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.148 [2024-11-06 09:00:00.282430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbba880 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.282563] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:47.148 [2024-11-06 09:00:00.282612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda0e0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.282636] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:47.148 [2024-11-06 09:00:00.282651] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:47.148 [2024-11-06 
09:00:00.282673] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:47.148 [2024-11-06 09:00:00.282697] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:47.148 [2024-11-06 09:00:00.282712] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:47.148 [2024-11-06 09:00:00.282725] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:47.148 [2024-11-06 09:00:00.283727] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:47.148 [2024-11-06 09:00:00.283755] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:47.148 [2024-11-06 09:00:00.283772] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:47.148 [2024-11-06 09:00:00.283799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03910 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.283821] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:47.148 [2024-11-06 09:00:00.283843] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:47.148 [2024-11-06 09:00:00.283859] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:47.148 [2024-11-06 09:00:00.283945] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:23:47.148 [2024-11-06 09:00:00.284364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.148 [2024-11-06 09:00:00.284394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc03910 with addr=10.0.0.2, port=4420 00:23:47.148 [2024-11-06 09:00:00.284412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03910 is same with the state(6) to be set 00:23:47.148 [2024-11-06 09:00:00.284477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03910 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.284543] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:47.148 [2024-11-06 09:00:00.284562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:47.148 [2024-11-06 09:00:00.284577] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:47.148 [2024-11-06 09:00:00.284633] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:47.148 [2024-11-06 09:00:00.284751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e2d0 (9): Bad file descriptor 00:23:47.148 [2024-11-06 09:00:00.284820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d110 (9): Bad file descriptor 00:23:47.149 [2024-11-06 09:00:00.284959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.284983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285125] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 
09:00:00.285501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.285969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.149 [2024-11-06 09:00:00.285985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.149 [2024-11-06 09:00:00.286001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.150 [2024-11-06 09:00:00.286066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286259] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 
09:00:00.286812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.150 [2024-11-06 09:00:00.286948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.150 [2024-11-06 09:00:00.286965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.151 [2024-11-06 09:00:00.286980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.151 [2024-11-06 09:00:00.286997] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.151 [2024-11-06 09:00:00.287012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.151 [2024-11-06 09:00:00.287029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.287044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.287060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.287090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ba390 is same with the state(6) to be set 00:23:47.152 [2024-11-06 09:00:00.288380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.152 [2024-11-06 09:00:00.288673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.152 [2024-11-06 09:00:00.288689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288857] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.288973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.288988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.289004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.289019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.289036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.289051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.289068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.289083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.289100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.289115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.153 [2024-11-06 09:00:00.289142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.153 [2024-11-06 09:00:00.289157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 
09:00:00.289238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.154 [2024-11-06 09:00:00.289536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.154 [2024-11-06 09:00:00.289551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.155 [2024-11-06 09:00:00.289762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.155 [2024-11-06 09:00:00.289793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.155 [2024-11-06 09:00:00.289808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.289842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.289858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.289875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.289890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.289907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.289923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.289939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.289954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.289971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.289986] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.290003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.290018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.290035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.290050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.156 [2024-11-06 09:00:00.290066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.156 [2024-11-06 09:00:00.290081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.157 [2024-11-06 09:00:00.290342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.157 [2024-11-06 09:00:00.290357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.158 [2024-11-06 09:00:00.290374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.158 [2024-11-06 09:00:00.290389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.158 [2024-11-06 09:00:00.290405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.158 [2024-11-06 09:00:00.290420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.290437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.290452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.290468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.290482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.290499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.290518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.290534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(6) to be set 00:23:47.159 [2024-11-06 09:00:00.291768] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.291792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.291813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.291838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.291857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.291873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.291889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.159 [2024-11-06 09:00:00.291905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.159 [2024-11-06 09:00:00.291922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.291936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.291953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.291968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.291985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.160 [2024-11-06 09:00:00.292350] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.160 [2024-11-06 09:00:00.292365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292525] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.161 [2024-11-06 09:00:00.292696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.161 [2024-11-06 09:00:00.292713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.161-00:23:47.164 [2024-11-06 09:00:00.292729-09:00:00.293899] nvme_qpair.c: 243/474: repeated *NOTICE* pairs (nvme_io_qpair_print_command / spdk_nvme_print_completion): READ sqid:1 cid:29-63 nsid:1 lba:20096-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (cid advances by 1, lba by 128, per pair)
00:23:47.164 [2024-11-06 09:00:00.293914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7020 is same with the state(6) to be set
00:23:47.164-00:23:47.166 [2024-11-06 09:00:00.295193-09:00:00.297316] nvme_qpair.c: 243/474: same repeated READ / ABORTED - SQ DELETION (00/08) pattern for sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128
00:23:47.166 [2024-11-06 09:00:00.297331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbcd60 is same with the state(6) to be set
00:23:47.166 [2024-11-06 09:00:00.298558] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:47.166 [2024-11-06 09:00:00.298591] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:47.166 [2024-11-06 09:00:00.298612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:47.166 [2024-11-06 09:00:00.298635] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:47.166 [2024-11-06 09:00:00.299039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.166 [2024-11-06 09:00:00.299070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b5270 with addr=10.0.0.2, port=4420
00:23:47.166 [2024-11-06 09:00:00.299088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5270 is same with the state(6) to be set
00:23:47.166 [2024-11-06 09:00:00.299190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.166 [2024-11-06 09:00:00.299215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b2ab0 with addr=10.0.0.2, port=4420
00:23:47.166 [2024-11-06 09:00:00.299232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b2ab0 is same with the state(6) to be set
00:23:47.166 [2024-11-06 09:00:00.299313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.166 [2024-11-06 09:00:00.299340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbe5ff0 with addr=10.0.0.2, port=4420
00:23:47.166 [2024-11-06 09:00:00.299356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe5ff0 is same with the state(6) to be set
00:23:47.166 [2024-11-06 09:00:00.299435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.166 [2024-11-06 09:00:00.299459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc28420 with addr=10.0.0.2, port=4420
00:23:47.166 [2024-11-06 09:00:00.299475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc28420 is same with the state(6) to be set
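The aborted READs above follow a simple arithmetic pattern (each command covers 128 blocks, so the LBA advances by 128 per command id), and the connect() failures report errno = 111. A minimal sketch of both observations, assuming a Linux host (where errno 111 is ECONNREFUSED; the base LBA of 16384 is read off the first entry of each batch in this log):

```python
import errno

# In each batch, command id n maps to lba = base + n * 128 (len:128 per READ).
def lba_for_cid(cid, base_lba=16384, blocks_per_cmd=128):
    return base_lba + cid * blocks_per_cmd

print(lba_for_cid(29))  # 20096, matching "cid:29 ... lba:20096" in the log
print(lba_for_cid(63))  # 24448, matching "cid:63 ... lba:24448" in the log

# The posix_sock_create errors report errno = 111; on Linux that is
# ECONNREFUSED, i.e. nothing was listening at 10.0.0.2:4420 when the
# reconnect after "resetting controller" was attempted.
print(errno.errorcode[111])  # ECONNREFUSED
```

This is consistent with the test tearing down the TCP target (SQ deletion aborts the in-flight sequential reads) before the host-side controllers finish reconnecting.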
00:23:47.166 [2024-11-06 09:00:00.300371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.300956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.300970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.166 [2024-11-06 09:00:00.300987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.301001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.301019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.301034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.301050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.301065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.301081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.166 [2024-11-06 09:00:00.301096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.166 [2024-11-06 09:00:00.301113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301166] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 
09:00:00.301743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301929] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.301976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.301993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 [2024-11-06 09:00:00.302279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.167 [2024-11-06 09:00:00.302296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.167 
[2024-11-06 09:00:00.302311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.302531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.302551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb82f0 is same with the state(6) to be set 00:23:47.168 [2024-11-06 09:00:00.303818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.303889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.303905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.303922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.303937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.303953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.303968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.303985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.303999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:47.168 [2024-11-06 09:00:00.304154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.168 [2024-11-06 09:00:00.304673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.168 [2024-11-06 09:00:00.304689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 
09:00:00.304896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.304976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.304991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305071] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 
[2024-11-06 09:00:00.305444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.169 [2024-11-06 09:00:00.305938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.169 [2024-11-06 09:00:00.305953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbba50 is same with the state(6) to be set 00:23:47.169 [2024-11-06 09:00:00.308317] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:47.169 [2024-11-06 09:00:00.308351] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:47.169 [2024-11-06 09:00:00.308370] 
nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:47.170 [2024-11-06 09:00:00.308387] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:47.170 [2024-11-06 09:00:00.308411] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:47.170 task offset: 32128 on job bdev=Nvme2n1 fails 00:23:47.170 00:23:47.170 Latency(us) 00:23:47.170 [2024-11-06T08:00:00.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.170 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme1n1 ended in about 0.96 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme1n1 : 0.96 141.06 8.82 66.87 0.00 304594.81 9903.22 267192.70 00:23:47.170 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme2n1 ended in about 0.94 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme2n1 : 0.94 209.82 13.11 68.16 0.00 223229.18 11990.66 217482.43 00:23:47.170 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme3n1 ended in about 0.97 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme3n1 : 0.97 198.28 12.39 66.09 0.00 230352.97 28738.75 264085.81 00:23:47.170 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme4n1 ended in about 0.97 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme4n1 : 0.97 197.58 12.35 65.86 0.00 226620.49 18641.35 265639.25 00:23:47.170 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme5n1 ended in about 0.98 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 
Nvme5n1 : 0.98 131.27 8.20 65.63 0.00 297254.56 22816.24 288940.94 00:23:47.170 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme6n1 ended in about 0.98 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme6n1 : 0.98 130.11 8.13 65.06 0.00 294048.55 22427.88 246997.90 00:23:47.170 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme7n1 ended in about 0.96 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme7n1 : 0.96 200.30 12.52 66.77 0.00 209723.73 18252.99 268746.15 00:23:47.170 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme8n1 ended in about 0.96 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme8n1 : 0.96 198.18 12.39 6.23 0.00 267986.43 21262.79 293601.28 00:23:47.170 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme9n1 ended in about 0.99 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme9n1 : 0.99 129.67 8.10 64.83 0.00 277391.23 21068.61 278066.82 00:23:47.170 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:47.170 Job: Nvme10n1 ended in about 0.98 seconds with error 00:23:47.170 Verification LBA range: start 0x0 length 0x400 00:23:47.170 Nvme10n1 : 0.98 130.81 8.18 65.41 0.00 268581.29 20777.34 306028.85 00:23:47.170 [2024-11-06T08:00:00.459Z] =================================================================================================================== 00:23:47.170 [2024-11-06T08:00:00.459Z] Total : 1667.08 104.19 600.91 0.00 255678.03 9903.22 306028.85 00:23:47.170 [2024-11-06 09:00:00.339202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:47.170 [2024-11-06 09:00:00.339283] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:47.170 [2024-11-06 09:00:00.339409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b5270 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.339440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b2ab0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.339460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5ff0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.339479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc28420 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.339846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.339890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b56f0 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.339911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b56f0 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.340028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a96c0 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.340045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a96c0 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.340182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbda0e0 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.340199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbda0e0 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.340316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc03910 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.340332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc03910 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.340437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x71d110 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.340454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71d110 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.170 [2024-11-06 09:00:00.340577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2e2d0 with addr=10.0.0.2, port=4420 00:23:47.170 [2024-11-06 09:00:00.340593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2e2d0 is same with the state(6) to be set 00:23:47.170 [2024-11-06 09:00:00.340610] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:47.170 [2024-11-06 09:00:00.340633] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:47.170 [2024-11-06 09:00:00.340650] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:23:47.170 [2024-11-06 09:00:00.340674] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:47.170 [2024-11-06 09:00:00.340690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:47.170 [2024-11-06 09:00:00.340703] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:47.170 [2024-11-06 09:00:00.340722] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:47.170 [2024-11-06 09:00:00.340737] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:47.170 [2024-11-06 09:00:00.340752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:47.170 [2024-11-06 09:00:00.340770] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:47.170 [2024-11-06 09:00:00.340785] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:47.170 [2024-11-06 09:00:00.340799] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:47.170 [2024-11-06 09:00:00.340846] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:23:47.170 [2024-11-06 09:00:00.340883] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:23:47.170 [2024-11-06 09:00:00.340905] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:23:47.170 [2024-11-06 09:00:00.340925] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:23:47.170 [2024-11-06 09:00:00.341583] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:47.170 [2024-11-06 09:00:00.341611] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:47.170 [2024-11-06 09:00:00.341625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:47.170 [2024-11-06 09:00:00.341639] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:47.170 [2024-11-06 09:00:00.341657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b56f0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.341678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a96c0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.341697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbda0e0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.341714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc03910 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.341732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d110 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.341749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e2d0 (9): Bad file descriptor 00:23:47.170 [2024-11-06 09:00:00.342099] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:47.170 [2024-11-06 09:00:00.342134] 
nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342149] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:47.171 [2024-11-06 09:00:00.342174] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:47.171 [2024-11-06 09:00:00.342190] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342204] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:47.171 [2024-11-06 09:00:00.342221] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:47.171 [2024-11-06 09:00:00.342235] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342249] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:47.171 [2024-11-06 09:00:00.342265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:47.171 [2024-11-06 09:00:00.342279] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342293] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:23:47.171 [2024-11-06 09:00:00.342311] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:47.171 [2024-11-06 09:00:00.342325] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342338] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:47.171 [2024-11-06 09:00:00.342355] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:47.171 [2024-11-06 09:00:00.342369] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:47.171 [2024-11-06 09:00:00.342383] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:47.171 [2024-11-06 09:00:00.342438] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:47.171 [2024-11-06 09:00:00.342460] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:47.171 [2024-11-06 09:00:00.342474] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:47.171 [2024-11-06 09:00:00.342488] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:47.171 [2024-11-06 09:00:00.342501] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:47.171 [2024-11-06 09:00:00.342514] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:23:47.736 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 875874 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 875874 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 875874 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.670 rmmod nvme_tcp 00:23:48.670 rmmod nvme_fabrics 00:23:48.670 rmmod nvme_keyring 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:48.670 09:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 875694 ']' 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 875694 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 875694 ']' 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 875694 00:23:48.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (875694) - No such process 00:23:48.670 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 875694 is not found' 00:23:48.670 Process with pid 875694 is not found 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.671 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.206 00:23:51.206 real 0m7.472s 00:23:51.206 user 0m18.434s 00:23:51.206 sys 0m1.472s 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.206 ************************************ 00:23:51.206 END TEST nvmf_shutdown_tc3 00:23:51.206 ************************************ 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.206 ************************************ 00:23:51.206 START TEST nvmf_shutdown_tc4 00:23:51.206 ************************************ 00:23:51.206 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.206 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.206 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:51.206 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:51.206 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.206 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.206 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:23:51.207 Found net devices under 0000:09:00.0: cvl_0_0 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:51.207 Found net devices under 0000:09:00.1: cvl_0_1 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == 
tcp ]] 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.207 09:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.207 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:23:51.207 00:23:51.207 --- 10.0.0.2 ping statistics --- 00:23:51.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.207 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:23:51.207 00:23:51.207 --- 10.0.0.1 ping statistics --- 00:23:51.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.207 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:51.207 09:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=876901 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 876901 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 876901 ']' 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.207 [2024-11-06 09:00:04.191948] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:23:51.207 [2024-11-06 09:00:04.192045] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.207 [2024-11-06 09:00:04.267648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.207 [2024-11-06 09:00:04.323220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.207 [2024-11-06 09:00:04.323274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.207 [2024-11-06 09:00:04.323298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.207 [2024-11-06 09:00:04.323310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.207 [2024-11-06 09:00:04.323327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:51.207 [2024-11-06 09:00:04.324841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.207 [2024-11-06 09:00:04.324898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.207 [2024-11-06 09:00:04.324964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:51.207 [2024-11-06 09:00:04.324967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.207 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.208 [2024-11-06 09:00:04.461085] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.208 09:00:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.208 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.465 Malloc1 00:23:51.465 [2024-11-06 09:00:04.547979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.465 Malloc2 00:23:51.465 Malloc3 00:23:51.465 Malloc4 00:23:51.465 Malloc5 00:23:51.723 Malloc6 00:23:51.723 Malloc7 00:23:51.723 Malloc8 00:23:51.723 Malloc9 
00:23:51.723 Malloc10 00:23:51.723 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.723 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:51.723 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.723 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.981 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=877075 00:23:51.981 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:51.981 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:51.981 [2024-11-06 09:00:05.083262] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 876901 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 876901 ']' 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 876901 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 876901 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 876901' 00:23:57.249 killing process with pid 876901 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 876901 00:23:57.249 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 876901 00:23:57.249 Write completed with error (sct=0, sc=8) 00:23:57.249 Write completed with error (sct=0, sc=8) 00:23:57.249 Write completed with error (sct=0, sc=8) 00:23:57.249 starting I/O failed: -6 
00:23:57.249 Write completed with error (sct=0, sc=8) [line repeated many times, interleaved with "starting I/O failed: -6"]
00:23:57.249 [2024-11-06 09:00:10.078664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.250 [2024-11-06 09:00:10.079855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.250 [2024-11-06 09:00:10.081030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.251 [2024-11-06 09:00:10.082627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.251 NVMe io qpair process completion error
00:23:57.251 Write completed with error (sct=0, sc=8) [line repeated many times, interleaved with "starting I/O failed: -6"]
00:23:57.251 [2024-11-06 09:00:10.086846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.251 [2024-11-06 09:00:10.087927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.251 [2024-11-06 09:00:10.088142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b120 is same with the state(6) to be set [repeated through 09:00:10.088239]
00:23:57.251 [2024-11-06 09:00:10.088576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79b5f0 is same with the state(6) to be set [repeated through 09:00:10.088675]
00:23:57.252 [2024-11-06 09:00:10.089083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.252 [2024-11-06 09:00:10.089364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79bac0 is same with the state(6) to be set [repeated through 09:00:10.089554]
00:23:57.252 [2024-11-06 09:00:10.089909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79ac50 is same with the state(6) to be set [repeated through 09:00:10.090005]
00:23:57.252 [2024-11-06 09:00:10.090874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.252 NVMe io qpair process completion error
00:23:57.252 Write completed with error (sct=0, sc=8) [line repeated many times, interleaved with "starting I/O failed: -6"]
00:23:57.253 [2024-11-06 09:00:10.092050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.253 Write completed with error (sct=0, sc=8) [line repeated, interleaved with "starting I/O failed: -6"] 00:23:57.253 starting I/O 
failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, 
sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 [2024-11-06 09:00:10.093103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 
00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 
00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 [2024-11-06 09:00:10.094238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 
Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.253 starting I/O failed: -6 00:23:57.253 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 
00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: 
-6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 [2024-11-06 09:00:10.096496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.254 NVMe io qpair process completion error 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write 
completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 [2024-11-06 09:00:10.097774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 
00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 Write completed with error (sct=0, sc=8) 00:23:57.254 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 
00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 [2024-11-06 09:00:10.098854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 
Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 Write completed with error (sct=0, 
sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 [2024-11-06 09:00:10.099954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write 
completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 
Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 00:23:57.255 Write completed with error (sct=0, sc=8) 00:23:57.255 starting I/O failed: -6 
00:23:57.255 [2024-11-06 09:00:10.101745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.255 NVMe io qpair process completion error
00:23:57.255 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.256 [2024-11-06 09:00:10.103091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.256 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.256 [2024-11-06 09:00:10.104131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.256 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.256 [2024-11-06 09:00:10.105264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.256 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.257 [2024-11-06 09:00:10.107074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.257 NVMe io qpair process completion error
00:23:57.257 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.257 [2024-11-06 09:00:10.108390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.257 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.257 [2024-11-06 09:00:10.109465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.258 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.258 [2024-11-06 09:00:10.110641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:57.258 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.258 [2024-11-06 09:00:10.113767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.258 NVMe io qpair process completion error
00:23:57.259 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.259 [2024-11-06 09:00:10.115125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:57.259 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
00:23:57.259 [2024-11-06 09:00:10.116175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:57.259 [... repeated entries elided: Write completed with error (sct=0, sc=8) / starting I/O failed: -6 ...]
Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 [2024-11-06 09:00:10.117573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.259 starting I/O failed: -6 
00:23:57.259 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: 
-6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O 
failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 [2024-11-06 09:00:10.121131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.260 NVMe io qpair process completion error 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed 
with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 [2024-11-06 09:00:10.122497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 
00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write 
completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.260 starting I/O failed: -6 00:23:57.260 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 [2024-11-06 09:00:10.123505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 
00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 
00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 [2024-11-06 09:00:10.124725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:57.261 starting I/O failed: -6 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, 
sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error 
(sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with 
error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 [2024-11-06 09:00:10.126752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:57.261 NVMe io qpair process completion error 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 Write completed with error (sct=0, sc=8) 00:23:57.261 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error 
(sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 [2024-11-06 09:00:10.127971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O 
failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 starting I/O failed: -6 00:23:57.262 Write completed with error (sct=0, sc=8) 00:23:57.262 Write completed with error (sct=0, 
sc=8)
00:23:57.262 starting I/O failed: -6
00:23:57.262 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.262 [2024-11-06 09:00:10.129085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.262 [2024-11-06 09:00:10.130272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.263 [2024-11-06 09:00:10.132767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.263 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.263 [2024-11-06 09:00:10.134174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.264 [2024-11-06 09:00:10.135138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.264 [2024-11-06 09:00:10.136323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records elided ...]
00:23:57.265 [2024-11-06 09:00:10.140478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:57.265 NVMe io qpair process completion error
00:23:57.265 Initializing NVMe Controllers
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:57.265 Controller IO queue size 128, less than required.
00:23:57.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:57.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
[... the same "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size" warning pair followed each of the ten controller attaches; repeats elided ...]
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:57.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:57.265 Initialization complete. Launching workers.
00:23:57.265 ======================================================== 00:23:57.265 Latency(us) 00:23:57.265 Device Information : IOPS MiB/s Average min max 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1880.68 80.81 68079.92 818.50 123955.36 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1812.19 77.87 70675.06 1094.99 124273.98 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1804.14 77.52 71017.06 902.09 125802.76 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1846.11 79.32 69450.22 825.01 120142.32 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1862.41 80.03 68888.41 1044.18 133861.02 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1855.46 79.73 69175.52 855.02 116693.16 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1780.23 76.49 71265.38 1226.97 115141.99 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1813.06 77.90 70764.08 967.37 137413.10 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1819.80 78.19 69720.24 969.16 115343.00 00:23:57.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1850.24 79.50 68593.48 860.17 113409.06 00:23:57.265 ======================================================== 00:23:57.265 Total : 18324.31 787.37 69746.71 818.50 137413.10 00:23:57.265 00:23:57.265 [2024-11-06 09:00:10.146746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff46b0 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.146847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6900 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.146919] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff55f0 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.146978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5c50 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff5920 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff4d10 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6720 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff52c0 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6ae0 is same with the state(6) to be set 00:23:57.265 [2024-11-06 09:00:10.147326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff49e0 is same with the state(6) to be set 00:23:57.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:57.525 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 877075 00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 877075 00:23:58.462 09:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 877075
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 876901 ']'
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 876901
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 876901 ']'
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 876901
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (876901) - No such process
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 876901 is not found'
00:23:58.462 Process with pid 876901 is not found
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:58.462 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:00.996
00:24:00.996 real 0m9.748s
00:24:00.996 user 0m23.552s
00:24:00.996 sys 0m5.673s
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:24:00.996 ************************************
00:24:00.996 END TEST nvmf_shutdown_tc4
00:24:00.996 ************************************
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:24:00.996
00:24:00.996 real 0m37.467s
00:24:00.996 user 1m41.341s
00:24:00.996 sys 0m12.135s
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:24:00.996 ************************************
00:24:00.996 END TEST nvmf_shutdown
00:24:00.996 ************************************
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:24:00.996
00:24:00.996 real 11m34.783s
00:24:00.996 user 27m40.933s
00:24:00.996 sys 2m43.891s
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:00.996 09:00:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:00.996 ************************************
00:24:00.996 END TEST nvmf_target_extra
00:24:00.996 ************************************
00:24:00.996 09:00:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:24:00.997 09:00:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:00.997 09:00:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:00.997 09:00:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:00.997 ************************************
00:24:00.997 START TEST nvmf_host
00:24:00.997 ************************************
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host --
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:24:00.997 * Looking for test storage...
00:24:00.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lcov --version
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:24:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.997 --rc genhtml_branch_coverage=1
00:24:00.997 --rc genhtml_function_coverage=1
00:24:00.997 --rc genhtml_legend=1
00:24:00.997 --rc geninfo_all_blocks=1
00:24:00.997 --rc geninfo_unexecuted_blocks=1
00:24:00.997
00:24:00.997 '
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:24:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.997 --rc genhtml_branch_coverage=1
00:24:00.997 --rc genhtml_function_coverage=1
00:24:00.997 --rc genhtml_legend=1
00:24:00.997 --rc geninfo_all_blocks=1
00:24:00.997 --rc geninfo_unexecuted_blocks=1
00:24:00.997
00:24:00.997 '
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:24:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.997 --rc genhtml_branch_coverage=1
00:24:00.997 --rc genhtml_function_coverage=1
00:24:00.997 --rc genhtml_legend=1
00:24:00.997 --rc geninfo_all_blocks=1
00:24:00.997 --rc geninfo_unexecuted_blocks=1
00:24:00.997
00:24:00.997 '
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:24:00.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.997 --rc genhtml_branch_coverage=1
00:24:00.997 --rc genhtml_function_coverage=1
00:24:00.997 --rc genhtml_legend=1
00:24:00.997 --rc geninfo_all_blocks=1
00:24:00.997 --rc geninfo_unexecuted_blocks=1
00:24:00.997
00:24:00.997 '
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17
-- # nvme gen-hostnqn
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:00.997 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:00.997 ************************************
00:24:00.997 START TEST nvmf_multicontroller
00:24:00.997 ************************************
00:24:00.997 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:24:00.997 * Looking for test storage...
00:24:00.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:00.997 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1688 -- # [[ y == y ]]
00:24:00.997 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lcov --version
00:24:00.997 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # awk '{print $NF}'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1689 -- # lt 1.15 2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:24:00.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.998 --rc genhtml_branch_coverage=1
00:24:00.998 --rc genhtml_function_coverage=1
00:24:00.998 --rc genhtml_legend=1
00:24:00.998 --rc geninfo_all_blocks=1
00:24:00.998 --rc geninfo_unexecuted_blocks=1
00:24:00.998
00:24:00.998 '
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:24:00.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.998 --rc genhtml_branch_coverage=1
00:24:00.998 --rc genhtml_function_coverage=1
00:24:00.998 --rc genhtml_legend=1
00:24:00.998 --rc geninfo_all_blocks=1
00:24:00.998 --rc geninfo_unexecuted_blocks=1
00:24:00.998
00:24:00.998 '
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:24:00.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.998 --rc genhtml_branch_coverage=1
00:24:00.998 --rc genhtml_function_coverage=1
00:24:00.998 --rc genhtml_legend=1
00:24:00.998 --rc geninfo_all_blocks=1
00:24:00.998 --rc geninfo_unexecuted_blocks=1
00:24:00.998
00:24:00.998 '
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:24:00.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:00.998 --rc genhtml_branch_coverage=1
00:24:00.998 --rc genhtml_function_coverage=1
00:24:00.998 --rc genhtml_legend=1
00:24:00.998 --rc geninfo_all_blocks=1
00:24:00.998 --rc geninfo_unexecuted_blocks=1
00:24:00.998
00:24:00.998 '
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- #
NVMF_SECOND_PORT=4421
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs
00:24:00.998 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
00:24:00.999 09:00:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@320 -- # local -ga e810 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:03.530 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:03.530 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:03.530 09:00:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:03.530 Found net devices under 0000:09:00.0: cvl_0_0 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:03.530 Found net devices under 0000:09:00.1: cvl_0_1 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:03.530 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:03.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:24:03.531 00:24:03.531 --- 10.0.0.2 ping statistics --- 00:24:03.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.531 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:03.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:03.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:24:03.531 00:24:03.531 --- 10.0.0.1 ping statistics --- 00:24:03.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.531 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=880384 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 880384 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 880384 ']' 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.531 [2024-11-06 09:00:16.522488] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:24:03.531 [2024-11-06 09:00:16.522561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:03.531 [2024-11-06 09:00:16.592636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:03.531 [2024-11-06 09:00:16.648613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:03.531 [2024-11-06 09:00:16.648663] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:03.531 [2024-11-06 09:00:16.648676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:03.531 [2024-11-06 09:00:16.648688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:03.531 [2024-11-06 09:00:16.648698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:03.531 [2024-11-06 09:00:16.650170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.531 [2024-11-06 09:00:16.650233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:03.531 [2024-11-06 09:00:16.650238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:03.531 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.789 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:03.789 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:03.789 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 [2024-11-06 09:00:16.829564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 Malloc0 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 [2024-11-06 
09:00:16.898627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 [2024-11-06 09:00:16.906509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 Malloc1 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=880408 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 880408 /var/tmp/bdevperf.sock 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 880408 ']' 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:03.790 09:00:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.048 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:04.048 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:04.048 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:04.048 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.048 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.306 NVMe0n1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.306 1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:04.306 09:00:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.306 request: 00:24:04.306 { 00:24:04.306 "name": "NVMe0", 00:24:04.306 "trtype": "tcp", 00:24:04.306 "traddr": "10.0.0.2", 00:24:04.306 "adrfam": "ipv4", 00:24:04.306 "trsvcid": "4420", 00:24:04.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.306 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:04.306 "hostaddr": "10.0.0.1", 00:24:04.306 "prchk_reftag": false, 00:24:04.306 "prchk_guard": false, 00:24:04.306 "hdgst": false, 00:24:04.306 "ddgst": false, 00:24:04.306 "allow_unrecognized_csi": false, 00:24:04.306 "method": "bdev_nvme_attach_controller", 00:24:04.306 "req_id": 1 00:24:04.306 } 00:24:04.306 Got JSON-RPC error response 00:24:04.306 response: 00:24:04.306 { 00:24:04.306 "code": -114, 00:24:04.306 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:04.306 } 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:04.306 09:00:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.306 request: 00:24:04.306 { 00:24:04.306 "name": "NVMe0", 00:24:04.306 "trtype": "tcp", 00:24:04.306 "traddr": "10.0.0.2", 00:24:04.306 "adrfam": "ipv4", 00:24:04.306 "trsvcid": "4420", 00:24:04.306 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:04.306 "hostaddr": "10.0.0.1", 00:24:04.306 "prchk_reftag": false, 00:24:04.306 "prchk_guard": false, 00:24:04.306 "hdgst": false, 00:24:04.306 "ddgst": false, 00:24:04.306 "allow_unrecognized_csi": false, 00:24:04.306 "method": "bdev_nvme_attach_controller", 00:24:04.306 "req_id": 1 00:24:04.306 } 00:24:04.306 Got JSON-RPC error response 00:24:04.306 response: 00:24:04.306 { 00:24:04.306 "code": -114, 00:24:04.306 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:04.306 } 00:24:04.306 09:00:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.306 request: 00:24:04.306 { 00:24:04.306 "name": "NVMe0", 00:24:04.306 "trtype": "tcp", 00:24:04.306 "traddr": "10.0.0.2", 00:24:04.306 "adrfam": "ipv4", 00:24:04.306 "trsvcid": "4420", 00:24:04.306 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.306 "hostaddr": "10.0.0.1", 00:24:04.306 "prchk_reftag": false, 00:24:04.306 "prchk_guard": false, 00:24:04.306 "hdgst": false, 00:24:04.306 "ddgst": false, 00:24:04.306 "multipath": "disable", 00:24:04.306 "allow_unrecognized_csi": false, 00:24:04.306 "method": "bdev_nvme_attach_controller", 00:24:04.306 "req_id": 1 00:24:04.306 } 00:24:04.306 Got JSON-RPC error response 00:24:04.306 response: 00:24:04.306 { 00:24:04.306 "code": -114, 00:24:04.306 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:04.306 } 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.306 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.307 request: 00:24:04.307 { 00:24:04.307 "name": "NVMe0", 00:24:04.307 "trtype": "tcp", 00:24:04.307 "traddr": "10.0.0.2", 00:24:04.307 "adrfam": "ipv4", 00:24:04.307 "trsvcid": "4420", 00:24:04.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.307 "hostaddr": "10.0.0.1", 00:24:04.307 "prchk_reftag": false, 00:24:04.307 "prchk_guard": false, 00:24:04.307 "hdgst": false, 00:24:04.307 "ddgst": false, 00:24:04.307 "multipath": "failover", 00:24:04.307 "allow_unrecognized_csi": false, 00:24:04.307 "method": "bdev_nvme_attach_controller", 00:24:04.307 "req_id": 1 00:24:04.307 } 00:24:04.307 Got JSON-RPC error response 00:24:04.307 response: 00:24:04.307 { 00:24:04.307 "code": -114, 00:24:04.307 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:04.307 } 00:24:04.307 09:00:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.307 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.565 NVMe0n1 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.565 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:04.565 09:00:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:05.938 { 00:24:05.938 "results": [ 00:24:05.938 { 00:24:05.938 "job": "NVMe0n1", 00:24:05.938 "core_mask": "0x1", 00:24:05.938 "workload": "write", 00:24:05.938 "status": "finished", 00:24:05.938 "queue_depth": 128, 00:24:05.938 "io_size": 4096, 00:24:05.938 "runtime": 1.008827, 00:24:05.938 "iops": 18572.06438765021, 00:24:05.938 "mibps": 72.54712651425864, 00:24:05.938 "io_failed": 0, 00:24:05.938 "io_timeout": 0, 00:24:05.938 "avg_latency_us": 6875.834990037005, 00:24:05.938 "min_latency_us": 4150.613333333334, 00:24:05.938 "max_latency_us": 17670.447407407406 00:24:05.938 } 00:24:05.938 ], 00:24:05.938 "core_count": 1 00:24:05.938 } 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 880408 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 880408 ']' 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 880408 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880408 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880408' 00:24:05.938 killing process with pid 880408 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 880408 00:24:05.938 09:00:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 880408 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:05.938 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1594 -- # sort -u 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # cat 00:24:06.196 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:06.196 [2024-11-06 09:00:17.013433] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:24:06.196 [2024-11-06 09:00:17.013542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880408 ] 00:24:06.196 [2024-11-06 09:00:17.086308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.196 [2024-11-06 09:00:17.145742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.196 [2024-11-06 09:00:17.750450] bdev.c:4897:bdev_name_add: *ERROR*: Bdev name e86870b4-7a1d-4167-9cd2-e8fa977d7fcb already exists 00:24:06.196 [2024-11-06 09:00:17.750505] bdev.c:8100:bdev_register: *ERROR*: Unable to add uuid:e86870b4-7a1d-4167-9cd2-e8fa977d7fcb alias for bdev NVMe1n1 00:24:06.196 [2024-11-06 09:00:17.750521] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:06.196 Running I/O for 1 seconds... 00:24:06.196 18515.00 IOPS, 72.32 MiB/s 00:24:06.196 Latency(us) 00:24:06.196 [2024-11-06T08:00:19.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.196 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:06.196 NVMe0n1 : 1.01 18572.06 72.55 0.00 0.00 6875.83 4150.61 17670.45 00:24:06.196 [2024-11-06T08:00:19.485Z] =================================================================================================================== 00:24:06.196 [2024-11-06T08:00:19.485Z] Total : 18572.06 72.55 0.00 0.00 6875.83 4150.61 17670.45 00:24:06.196 Received shutdown signal, test time was about 1.000000 seconds 00:24:06.196 00:24:06.196 Latency(us) 00:24:06.196 [2024-11-06T08:00:19.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.196 [2024-11-06T08:00:19.485Z] =================================================================================================================== 00:24:06.196 [2024-11-06T08:00:19.485Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:06.196 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1601 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1595 -- # read -r file 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.196 rmmod nvme_tcp 00:24:06.196 rmmod nvme_fabrics 00:24:06.196 rmmod nvme_keyring 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 880384 ']' 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 880384 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 880384 ']' 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 880384 
00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 880384 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 880384' 00:24:06.196 killing process with pid 880384 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 880384 00:24:06.196 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 880384 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.458 09:00:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.989 00:24:08.989 real 0m7.645s 00:24:08.989 user 0m12.005s 00:24:08.989 sys 0m2.434s 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.989 ************************************ 00:24:08.989 END TEST nvmf_multicontroller 00:24:08.989 ************************************ 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.989 ************************************ 00:24:08.989 START TEST nvmf_aer 00:24:08.989 ************************************ 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:08.989 * Looking for test storage... 
00:24:08.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lcov --version 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:08.989 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.990 --rc genhtml_branch_coverage=1 00:24:08.990 --rc genhtml_function_coverage=1 00:24:08.990 --rc genhtml_legend=1 00:24:08.990 --rc geninfo_all_blocks=1 00:24:08.990 --rc geninfo_unexecuted_blocks=1 00:24:08.990 00:24:08.990 ' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.990 --rc 
genhtml_branch_coverage=1 00:24:08.990 --rc genhtml_function_coverage=1 00:24:08.990 --rc genhtml_legend=1 00:24:08.990 --rc geninfo_all_blocks=1 00:24:08.990 --rc geninfo_unexecuted_blocks=1 00:24:08.990 00:24:08.990 ' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.990 --rc genhtml_branch_coverage=1 00:24:08.990 --rc genhtml_function_coverage=1 00:24:08.990 --rc genhtml_legend=1 00:24:08.990 --rc geninfo_all_blocks=1 00:24:08.990 --rc geninfo_unexecuted_blocks=1 00:24:08.990 00:24:08.990 ' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:08.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:08.990 --rc genhtml_branch_coverage=1 00:24:08.990 --rc genhtml_function_coverage=1 00:24:08.990 --rc genhtml_legend=1 00:24:08.990 --rc geninfo_all_blocks=1 00:24:08.990 --rc geninfo_unexecuted_blocks=1 00:24:08.990 00:24:08.990 ' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.990 09:00:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:08.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:08.990 09:00:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.109 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:11.110 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:11.110 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.110 09:00:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:11.110 Found net devices under 0000:09:00.0: cvl_0_0 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:11.110 Found net devices under 0000:09:00.1: cvl_0_1 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.110 09:00:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:11.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:11.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:24:11.110 00:24:11.110 --- 10.0.0.2 ping statistics --- 00:24:11.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.110 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:24:11.110 00:24:11.110 --- 10.0.0.1 ping statistics --- 00:24:11.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.110 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=882742 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 882742 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 882742 ']' 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:11.110 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.110 [2024-11-06 09:00:24.149443] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:24:11.110 [2024-11-06 09:00:24.149525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.110 [2024-11-06 09:00:24.221095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.110 [2024-11-06 09:00:24.279211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:11.110 [2024-11-06 09:00:24.279267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.110 [2024-11-06 09:00:24.279279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.110 [2024-11-06 09:00:24.279290] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.110 [2024-11-06 09:00:24.279299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.110 [2024-11-06 09:00:24.280827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.110 [2024-11-06 09:00:24.280891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.110 [2024-11-06 09:00:24.280956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.110 [2024-11-06 09:00:24.280959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 [2024-11-06 09:00:24.439549] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 Malloc0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 [2024-11-06 09:00:24.513363] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.369 [ 00:24:11.369 { 00:24:11.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.369 "subtype": "Discovery", 00:24:11.369 "listen_addresses": [], 00:24:11.369 "allow_any_host": true, 00:24:11.369 "hosts": [] 00:24:11.369 }, 00:24:11.369 { 00:24:11.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.369 "subtype": "NVMe", 00:24:11.369 "listen_addresses": [ 00:24:11.369 { 00:24:11.369 "trtype": "TCP", 00:24:11.369 "adrfam": "IPv4", 00:24:11.369 "traddr": "10.0.0.2", 00:24:11.369 "trsvcid": "4420" 00:24:11.369 } 00:24:11.369 ], 00:24:11.369 "allow_any_host": true, 00:24:11.369 "hosts": [], 00:24:11.369 "serial_number": "SPDK00000000000001", 00:24:11.369 "model_number": "SPDK bdev Controller", 00:24:11.369 "max_namespaces": 2, 00:24:11.369 "min_cntlid": 1, 00:24:11.369 "max_cntlid": 65519, 00:24:11.369 "namespaces": [ 00:24:11.369 { 00:24:11.369 "nsid": 1, 00:24:11.369 "bdev_name": "Malloc0", 00:24:11.369 "name": "Malloc0", 00:24:11.369 "nguid": "3DEBA643F57349888FD88184751B54A8", 00:24:11.369 "uuid": "3deba643-f573-4988-8fd8-8184751b54a8" 00:24:11.369 } 00:24:11.369 ] 00:24:11.369 } 00:24:11.369 ] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=882772 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:11.369 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.627 Malloc1 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.627 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.627 [ 00:24:11.627 { 00:24:11.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:11.627 "subtype": "Discovery", 00:24:11.627 "listen_addresses": [], 00:24:11.627 "allow_any_host": true, 00:24:11.627 "hosts": [] 00:24:11.627 }, 00:24:11.627 { 00:24:11.627 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.627 "subtype": "NVMe", 00:24:11.627 "listen_addresses": [ 00:24:11.627 { 00:24:11.627 "trtype": "TCP", 00:24:11.627 "adrfam": "IPv4", 00:24:11.627 "traddr": "10.0.0.2", 00:24:11.627 "trsvcid": "4420" 00:24:11.627 } 00:24:11.627 ], 00:24:11.627 "allow_any_host": true, 00:24:11.627 "hosts": [], 00:24:11.627 "serial_number": "SPDK00000000000001", 00:24:11.627 "model_number": 
"SPDK bdev Controller", 00:24:11.627 "max_namespaces": 2, 00:24:11.627 "min_cntlid": 1, 00:24:11.627 "max_cntlid": 65519, 00:24:11.627 "namespaces": [ 00:24:11.627 { 00:24:11.627 "nsid": 1, 00:24:11.627 "bdev_name": "Malloc0", 00:24:11.627 "name": "Malloc0", 00:24:11.627 "nguid": "3DEBA643F57349888FD88184751B54A8", 00:24:11.627 "uuid": "3deba643-f573-4988-8fd8-8184751b54a8" 00:24:11.627 }, 00:24:11.627 { 00:24:11.627 "nsid": 2, 00:24:11.884 "bdev_name": "Malloc1", 00:24:11.884 "name": "Malloc1", 00:24:11.884 "nguid": "2FDE323F8D854323AB2933AFC800C9FB", 00:24:11.884 "uuid": "2fde323f-8d85-4323-ab29-33afc800c9fb" 00:24:11.884 } 00:24:11.884 ] 00:24:11.884 } 00:24:11.884 ] 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 882772 00:24:11.884 Asynchronous Event Request test 00:24:11.884 Attaching to 10.0.0.2 00:24:11.884 Attached to 10.0.0.2 00:24:11.884 Registering asynchronous event callbacks... 00:24:11.884 Starting namespace attribute notice tests for all controllers... 00:24:11.884 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:11.884 aer_cb - Changed Namespace 00:24:11.884 Cleaning up... 
00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.884 09:00:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.884 rmmod nvme_tcp 
00:24:11.884 rmmod nvme_fabrics 00:24:11.884 rmmod nvme_keyring 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 882742 ']' 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 882742 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 882742 ']' 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 882742 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 882742 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 882742' 00:24:11.884 killing process with pid 882742 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 882742 00:24:11.884 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 882742 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@297 -- # iptr 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.143 09:00:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.047 09:00:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:14.047 00:24:14.047 real 0m5.623s 00:24:14.047 user 0m4.766s 00:24:14.047 sys 0m2.028s 00:24:14.047 09:00:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:14.047 09:00:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 ************************************ 00:24:14.047 END TEST nvmf_aer 00:24:14.047 ************************************ 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.305 ************************************ 00:24:14.305 START TEST nvmf_async_init 00:24:14.305 
************************************ 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:14.305 * Looking for test storage... 00:24:14.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lcov --version 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@344 -- # case "$op" in 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.305 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:14.306 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:14.306 --rc genhtml_branch_coverage=1 00:24:14.306 --rc genhtml_function_coverage=1 00:24:14.306 --rc genhtml_legend=1 00:24:14.306 --rc geninfo_all_blocks=1 00:24:14.306 --rc geninfo_unexecuted_blocks=1 00:24:14.306 00:24:14.306 ' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:14.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.306 --rc genhtml_branch_coverage=1 00:24:14.306 --rc genhtml_function_coverage=1 00:24:14.306 --rc genhtml_legend=1 00:24:14.306 --rc geninfo_all_blocks=1 00:24:14.306 --rc geninfo_unexecuted_blocks=1 00:24:14.306 00:24:14.306 ' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:14.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.306 --rc genhtml_branch_coverage=1 00:24:14.306 --rc genhtml_function_coverage=1 00:24:14.306 --rc genhtml_legend=1 00:24:14.306 --rc geninfo_all_blocks=1 00:24:14.306 --rc geninfo_unexecuted_blocks=1 00:24:14.306 00:24:14.306 ' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:14.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.306 --rc genhtml_branch_coverage=1 00:24:14.306 --rc genhtml_function_coverage=1 00:24:14.306 --rc genhtml_legend=1 00:24:14.306 --rc geninfo_all_blocks=1 00:24:14.306 --rc geninfo_unexecuted_blocks=1 00:24:14.306 00:24:14.306 ' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.306 09:00:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.306 
09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=007a6fe57b654883acf88e01da88f6ea 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.306 09:00:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:16.839 09:00:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:16.839 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:16.839 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:16.839 Found net devices under 0000:09:00.0: cvl_0_0 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.839 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:16.839 Found net devices under 0000:09:00.1: cvl_0_1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:16.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:16.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:24:16.840 00:24:16.840 --- 10.0.0.2 ping statistics --- 00:24:16.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.840 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:16.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:24:16.840 00:24:16.840 --- 10.0.0.1 ping statistics --- 00:24:16.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.840 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=884837 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 884837 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 884837 ']' 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.840 09:00:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.840 [2024-11-06 09:00:29.895052] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:24:16.840 [2024-11-06 09:00:29.895150] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.840 [2024-11-06 09:00:29.966678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.840 [2024-11-06 09:00:30.026961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.840 [2024-11-06 09:00:30.027025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.840 [2024-11-06 09:00:30.027046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.840 [2024-11-06 09:00:30.027067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.840 [2024-11-06 09:00:30.027077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.840 [2024-11-06 09:00:30.027687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 [2024-11-06 09:00:30.164685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 null0 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 007a6fe57b654883acf88e01da88f6ea 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.098 [2024-11-06 09:00:30.204945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.098 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.356 nvme0n1 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.356 [ 00:24:17.356 { 00:24:17.356 "name": "nvme0n1", 00:24:17.356 "aliases": [ 00:24:17.356 "007a6fe5-7b65-4883-acf8-8e01da88f6ea" 00:24:17.356 ], 00:24:17.356 "product_name": "NVMe disk", 00:24:17.356 "block_size": 512, 00:24:17.356 "num_blocks": 2097152, 00:24:17.356 "uuid": "007a6fe5-7b65-4883-acf8-8e01da88f6ea", 00:24:17.356 "numa_id": 0, 00:24:17.356 "assigned_rate_limits": { 00:24:17.356 "rw_ios_per_sec": 0, 00:24:17.356 "rw_mbytes_per_sec": 0, 00:24:17.356 "r_mbytes_per_sec": 0, 00:24:17.356 "w_mbytes_per_sec": 0 00:24:17.356 }, 00:24:17.356 "claimed": false, 00:24:17.356 "zoned": false, 00:24:17.356 "supported_io_types": { 00:24:17.356 "read": true, 00:24:17.356 "write": true, 00:24:17.356 "unmap": false, 00:24:17.356 "flush": true, 00:24:17.356 "reset": true, 00:24:17.356 "nvme_admin": true, 00:24:17.356 "nvme_io": true, 00:24:17.356 "nvme_io_md": false, 00:24:17.356 "write_zeroes": true, 00:24:17.356 "zcopy": false, 00:24:17.356 "get_zone_info": false, 00:24:17.356 "zone_management": false, 00:24:17.356 "zone_append": false, 00:24:17.356 "compare": true, 00:24:17.356 "compare_and_write": true, 00:24:17.356 "abort": true, 00:24:17.356 "seek_hole": false, 00:24:17.356 "seek_data": false, 00:24:17.356 "copy": true, 00:24:17.356 
"nvme_iov_md": false 00:24:17.356 }, 00:24:17.356 "memory_domains": [ 00:24:17.356 { 00:24:17.356 "dma_device_id": "system", 00:24:17.356 "dma_device_type": 1 00:24:17.356 } 00:24:17.356 ], 00:24:17.356 "driver_specific": { 00:24:17.356 "nvme": [ 00:24:17.356 { 00:24:17.356 "trid": { 00:24:17.356 "trtype": "TCP", 00:24:17.356 "adrfam": "IPv4", 00:24:17.356 "traddr": "10.0.0.2", 00:24:17.356 "trsvcid": "4420", 00:24:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.356 }, 00:24:17.356 "ctrlr_data": { 00:24:17.356 "cntlid": 1, 00:24:17.356 "vendor_id": "0x8086", 00:24:17.356 "model_number": "SPDK bdev Controller", 00:24:17.356 "serial_number": "00000000000000000000", 00:24:17.356 "firmware_revision": "25.01", 00:24:17.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.356 "oacs": { 00:24:17.356 "security": 0, 00:24:17.356 "format": 0, 00:24:17.356 "firmware": 0, 00:24:17.356 "ns_manage": 0 00:24:17.356 }, 00:24:17.356 "multi_ctrlr": true, 00:24:17.356 "ana_reporting": false 00:24:17.356 }, 00:24:17.356 "vs": { 00:24:17.356 "nvme_version": "1.3" 00:24:17.356 }, 00:24:17.356 "ns_data": { 00:24:17.356 "id": 1, 00:24:17.356 "can_share": true 00:24:17.356 } 00:24:17.356 } 00:24:17.356 ], 00:24:17.356 "mp_policy": "active_passive" 00:24:17.356 } 00:24:17.356 } 00:24:17.356 ] 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.356 [2024-11-06 09:00:30.453941] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:17.356 [2024-11-06 09:00:30.454029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x18d5b20 (9): Bad file descriptor 00:24:17.356 [2024-11-06 09:00:30.585959] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.356 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.356 [ 00:24:17.356 { 00:24:17.356 "name": "nvme0n1", 00:24:17.356 "aliases": [ 00:24:17.356 "007a6fe5-7b65-4883-acf8-8e01da88f6ea" 00:24:17.356 ], 00:24:17.356 "product_name": "NVMe disk", 00:24:17.356 "block_size": 512, 00:24:17.357 "num_blocks": 2097152, 00:24:17.357 "uuid": "007a6fe5-7b65-4883-acf8-8e01da88f6ea", 00:24:17.357 "numa_id": 0, 00:24:17.357 "assigned_rate_limits": { 00:24:17.357 "rw_ios_per_sec": 0, 00:24:17.357 "rw_mbytes_per_sec": 0, 00:24:17.357 "r_mbytes_per_sec": 0, 00:24:17.357 "w_mbytes_per_sec": 0 00:24:17.357 }, 00:24:17.357 "claimed": false, 00:24:17.357 "zoned": false, 00:24:17.357 "supported_io_types": { 00:24:17.357 "read": true, 00:24:17.357 "write": true, 00:24:17.357 "unmap": false, 00:24:17.357 "flush": true, 00:24:17.357 "reset": true, 00:24:17.357 "nvme_admin": true, 00:24:17.357 "nvme_io": true, 00:24:17.357 "nvme_io_md": false, 00:24:17.357 "write_zeroes": true, 00:24:17.357 "zcopy": false, 00:24:17.357 "get_zone_info": false, 00:24:17.357 "zone_management": false, 00:24:17.357 "zone_append": false, 00:24:17.357 "compare": true, 00:24:17.357 "compare_and_write": true, 00:24:17.357 "abort": true, 00:24:17.357 "seek_hole": false, 00:24:17.357 "seek_data": false, 00:24:17.357 "copy": true, 00:24:17.357 "nvme_iov_md": false 00:24:17.357 }, 00:24:17.357 "memory_domains": [ 
00:24:17.357 { 00:24:17.357 "dma_device_id": "system", 00:24:17.357 "dma_device_type": 1 00:24:17.357 } 00:24:17.357 ], 00:24:17.357 "driver_specific": { 00:24:17.357 "nvme": [ 00:24:17.357 { 00:24:17.357 "trid": { 00:24:17.357 "trtype": "TCP", 00:24:17.357 "adrfam": "IPv4", 00:24:17.357 "traddr": "10.0.0.2", 00:24:17.357 "trsvcid": "4420", 00:24:17.357 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.357 }, 00:24:17.357 "ctrlr_data": { 00:24:17.357 "cntlid": 2, 00:24:17.357 "vendor_id": "0x8086", 00:24:17.357 "model_number": "SPDK bdev Controller", 00:24:17.357 "serial_number": "00000000000000000000", 00:24:17.357 "firmware_revision": "25.01", 00:24:17.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.357 "oacs": { 00:24:17.357 "security": 0, 00:24:17.357 "format": 0, 00:24:17.357 "firmware": 0, 00:24:17.357 "ns_manage": 0 00:24:17.357 }, 00:24:17.357 "multi_ctrlr": true, 00:24:17.357 "ana_reporting": false 00:24:17.357 }, 00:24:17.357 "vs": { 00:24:17.357 "nvme_version": "1.3" 00:24:17.357 }, 00:24:17.357 "ns_data": { 00:24:17.357 "id": 1, 00:24:17.357 "can_share": true 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ], 00:24:17.357 "mp_policy": "active_passive" 00:24:17.357 } 00:24:17.357 } 00:24:17.357 ] 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GnQmnCCtUS 
00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GnQmnCCtUS 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.GnQmnCCtUS 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.357 [2024-11-06 09:00:30.638563] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:17.357 [2024-11-06 09:00:30.638668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.357 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.615 [2024-11-06 09:00:30.654612] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.615 nvme0n1 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.615 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.615 [ 00:24:17.615 { 00:24:17.615 "name": "nvme0n1", 00:24:17.615 "aliases": [ 00:24:17.615 "007a6fe5-7b65-4883-acf8-8e01da88f6ea" 00:24:17.615 ], 00:24:17.615 "product_name": "NVMe disk", 00:24:17.615 "block_size": 512, 00:24:17.615 "num_blocks": 2097152, 00:24:17.615 "uuid": "007a6fe5-7b65-4883-acf8-8e01da88f6ea", 00:24:17.615 "numa_id": 0, 00:24:17.615 "assigned_rate_limits": { 00:24:17.615 "rw_ios_per_sec": 0, 00:24:17.615 
"rw_mbytes_per_sec": 0, 00:24:17.615 "r_mbytes_per_sec": 0, 00:24:17.615 "w_mbytes_per_sec": 0 00:24:17.615 }, 00:24:17.615 "claimed": false, 00:24:17.615 "zoned": false, 00:24:17.615 "supported_io_types": { 00:24:17.615 "read": true, 00:24:17.615 "write": true, 00:24:17.615 "unmap": false, 00:24:17.615 "flush": true, 00:24:17.615 "reset": true, 00:24:17.615 "nvme_admin": true, 00:24:17.615 "nvme_io": true, 00:24:17.615 "nvme_io_md": false, 00:24:17.616 "write_zeroes": true, 00:24:17.616 "zcopy": false, 00:24:17.616 "get_zone_info": false, 00:24:17.616 "zone_management": false, 00:24:17.616 "zone_append": false, 00:24:17.616 "compare": true, 00:24:17.616 "compare_and_write": true, 00:24:17.616 "abort": true, 00:24:17.616 "seek_hole": false, 00:24:17.616 "seek_data": false, 00:24:17.616 "copy": true, 00:24:17.616 "nvme_iov_md": false 00:24:17.616 }, 00:24:17.616 "memory_domains": [ 00:24:17.616 { 00:24:17.616 "dma_device_id": "system", 00:24:17.616 "dma_device_type": 1 00:24:17.616 } 00:24:17.616 ], 00:24:17.616 "driver_specific": { 00:24:17.616 "nvme": [ 00:24:17.616 { 00:24:17.616 "trid": { 00:24:17.616 "trtype": "TCP", 00:24:17.616 "adrfam": "IPv4", 00:24:17.616 "traddr": "10.0.0.2", 00:24:17.616 "trsvcid": "4421", 00:24:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:17.616 }, 00:24:17.616 "ctrlr_data": { 00:24:17.616 "cntlid": 3, 00:24:17.616 "vendor_id": "0x8086", 00:24:17.616 "model_number": "SPDK bdev Controller", 00:24:17.616 "serial_number": "00000000000000000000", 00:24:17.616 "firmware_revision": "25.01", 00:24:17.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.616 "oacs": { 00:24:17.616 "security": 0, 00:24:17.616 "format": 0, 00:24:17.616 "firmware": 0, 00:24:17.616 "ns_manage": 0 00:24:17.616 }, 00:24:17.616 "multi_ctrlr": true, 00:24:17.616 "ana_reporting": false 00:24:17.616 }, 00:24:17.616 "vs": { 00:24:17.616 "nvme_version": "1.3" 00:24:17.616 }, 00:24:17.616 "ns_data": { 00:24:17.616 "id": 1, 00:24:17.616 "can_share": true 00:24:17.616 } 
00:24:17.616 } 00:24:17.616 ], 00:24:17.616 "mp_policy": "active_passive" 00:24:17.616 } 00:24:17.616 } 00:24:17.616 ] 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.GnQmnCCtUS 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.616 rmmod nvme_tcp 00:24:17.616 rmmod nvme_fabrics 00:24:17.616 rmmod nvme_keyring 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:17.616 09:00:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 884837 ']' 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 884837 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 884837 ']' 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 884837 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 884837 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 884837' 00:24:17.616 killing process with pid 884837 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 884837 00:24:17.616 09:00:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 884837 00:24:17.874 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:17.874 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:17.875 09:00:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.875 09:00:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:20.475 00:24:20.475 real 0m5.714s 00:24:20.475 user 0m2.173s 00:24:20.475 sys 0m1.967s 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:20.475 ************************************ 00:24:20.475 END TEST nvmf_async_init 00:24:20.475 ************************************ 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.475 ************************************ 00:24:20.475 START TEST dma 00:24:20.475 ************************************ 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:20.475 * 
Looking for test storage... 00:24:20.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lcov --version 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.475 --rc genhtml_branch_coverage=1 00:24:20.475 --rc genhtml_function_coverage=1 00:24:20.475 --rc genhtml_legend=1 00:24:20.475 --rc geninfo_all_blocks=1 00:24:20.475 --rc geninfo_unexecuted_blocks=1 00:24:20.475 00:24:20.475 ' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.475 --rc genhtml_branch_coverage=1 00:24:20.475 --rc genhtml_function_coverage=1 
00:24:20.475 --rc genhtml_legend=1 00:24:20.475 --rc geninfo_all_blocks=1 00:24:20.475 --rc geninfo_unexecuted_blocks=1 00:24:20.475 00:24:20.475 ' 00:24:20.475 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:20.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.475 --rc genhtml_branch_coverage=1 00:24:20.475 --rc genhtml_function_coverage=1 00:24:20.476 --rc genhtml_legend=1 00:24:20.476 --rc geninfo_all_blocks=1 00:24:20.476 --rc geninfo_unexecuted_blocks=1 00:24:20.476 00:24:20.476 ' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:20.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.476 --rc genhtml_branch_coverage=1 00:24:20.476 --rc genhtml_function_coverage=1 00:24:20.476 --rc genhtml_legend=1 00:24:20.476 --rc geninfo_all_blocks=1 00:24:20.476 --rc geninfo_unexecuted_blocks=1 00:24:20.476 00:24:20.476 ' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:20.476 
09:00:33 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:20.476 00:24:20.476 real 0m0.184s 00:24:20.476 user 0m0.123s 00:24:20.476 sys 0m0.070s 00:24:20.476 09:00:33 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:20.476 ************************************ 00:24:20.476 END TEST dma 00:24:20.476 ************************************ 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.476 ************************************ 00:24:20.476 START TEST nvmf_identify 00:24:20.476 ************************************ 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:20.476 * Looking for test storage... 
00:24:20.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lcov --version 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:20.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.476 --rc genhtml_branch_coverage=1 00:24:20.476 --rc genhtml_function_coverage=1 00:24:20.476 --rc genhtml_legend=1 00:24:20.476 --rc geninfo_all_blocks=1 00:24:20.476 --rc geninfo_unexecuted_blocks=1 00:24:20.476 00:24:20.476 ' 00:24:20.476 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1702 -- 
# LCOV_OPTS=' 00:24:20.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.476 --rc genhtml_branch_coverage=1 00:24:20.476 --rc genhtml_function_coverage=1 00:24:20.476 --rc genhtml_legend=1 00:24:20.477 --rc geninfo_all_blocks=1 00:24:20.477 --rc geninfo_unexecuted_blocks=1 00:24:20.477 00:24:20.477 ' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.477 --rc genhtml_branch_coverage=1 00:24:20.477 --rc genhtml_function_coverage=1 00:24:20.477 --rc genhtml_legend=1 00:24:20.477 --rc geninfo_all_blocks=1 00:24:20.477 --rc geninfo_unexecuted_blocks=1 00:24:20.477 00:24:20.477 ' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:20.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:20.477 --rc genhtml_branch_coverage=1 00:24:20.477 --rc genhtml_function_coverage=1 00:24:20.477 --rc genhtml_legend=1 00:24:20.477 --rc geninfo_all_blocks=1 00:24:20.477 --rc geninfo_unexecuted_blocks=1 00:24:20.477 00:24:20.477 ' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
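The xtrace above steps through scripts/common.sh's pure-bash dotted-version comparison (`lt 1.15 2` via `cmp_versions`) to decide which lcov/genhtml flags the detected lcov supports. A minimal sketch of that field-by-field compare — the function name `ver_lt` is illustrative, not the exact helper from the repo, and only the less-than case traced above is modeled:

```shell
# Split each version on "." and compare numerically field by field,
# padding the shorter version with zeros, as in the cmp_versions trace.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0   # strictly less: done
        (( x > y )) && return 1   # strictly greater: done
    done
    return 1                      # equal versions are not "less than"
}
```

With this, `ver_lt 1.15 2` succeeds, which is exactly the branch the log takes before exporting the branch/function-coverage `LCOV_OPTS`.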
NVMF_IP_LEAST_ADDR=8 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:20.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:20.477 09:00:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.376 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.376 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.377 09:00:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:22.377 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.377 
09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:22.377 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:22.377 Found net devices under 0000:09:00.0: cvl_0_0 00:24:22.377 09:00:35 
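The discovery loop above matches each PCI device against the e810/x722/mlx ID tables built earlier (here both 0x8086:0x159b ports land in the e810 set and their `ice`-driven net devices are collected). A small sketch of that ID-to-family classification, using the vendor/device IDs visible in the trace — the helper name `nic_family` and the catch-all Mellanox wildcard are assumptions for illustration, not the repo's exact logic:

```shell
# Classify a NIC by PCI vendor:device ID, mirroring the e810/x722/mlx
# tables populated at the top of gather_supported_nvmf_pci_devs.
nic_family() {
    # $1 = vendor ID, $2 = device ID (hex strings as echoed in the log)
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox (simplified)
        *)                           echo unknown ;;
    esac
}
```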
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:22.377 Found net devices under 0000:09:00.1: cvl_0_1 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.377 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:24:22.635 00:24:22.635 --- 10.0.0.2 ping statistics --- 00:24:22.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.635 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:24:22.635 00:24:22.635 --- 10.0.0.1 ping statistics --- 00:24:22.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.635 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
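The `ipts` call traced above tags the firewall rule it inserts with an `SPDK_NVMF:` comment embedding the original arguments, so teardown can later find and delete exactly the rules the test suite added. A sketch of that tagging pattern, with the `iptables` invocation replaced by `echo` so it runs unprivileged:

```shell
# Real helper executes iptables; echoed here so the sketch needs no root.
# The -m comment match stores the original rule spec, making the test
# suite's rules greppable (and removable) by the SPDK_NVMF: prefix.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

For example, `ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT` reproduces the accept rule opened for port 4420 in the log, comment and all.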
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=886991 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 886991 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 886991 ']' 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.635 09:00:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.635 [2024-11-06 09:00:35.844138] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:24:22.635 [2024-11-06 09:00:35.844211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.635 [2024-11-06 09:00:35.916922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.893 [2024-11-06 09:00:35.978224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.893 [2024-11-06 09:00:35.978280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.893 [2024-11-06 09:00:35.978308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.893 [2024-11-06 09:00:35.978320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.893 [2024-11-06 09:00:35.978329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:22.893 [2024-11-06 09:00:35.979932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.893 [2024-11-06 09:00:35.979993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.893 [2024-11-06 09:00:35.983851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.893 [2024-11-06 09:00:35.983863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.893 [2024-11-06 09:00:36.115246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.893 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.152 Malloc0 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.152 09:00:36 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.152 [2024-11-06 09:00:36.215082] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.152 09:00:36 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:23.152 [
00:24:23.152   {
00:24:23.152     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:23.152     "subtype": "Discovery",
00:24:23.152     "listen_addresses": [
00:24:23.152       {
00:24:23.152         "trtype": "TCP",
00:24:23.152         "adrfam": "IPv4",
00:24:23.152         "traddr": "10.0.0.2",
00:24:23.152         "trsvcid": "4420"
00:24:23.152       }
00:24:23.152     ],
00:24:23.152     "allow_any_host": true,
00:24:23.152     "hosts": []
00:24:23.152   },
00:24:23.152   {
00:24:23.152     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:23.152     "subtype": "NVMe",
00:24:23.152     "listen_addresses": [
00:24:23.152       {
00:24:23.152         "trtype": "TCP",
00:24:23.152         "adrfam": "IPv4",
00:24:23.152         "traddr": "10.0.0.2",
00:24:23.152         "trsvcid": "4420"
00:24:23.152       }
00:24:23.152     ],
00:24:23.152     "allow_any_host": true,
00:24:23.152     "hosts": [],
00:24:23.152     "serial_number": "SPDK00000000000001",
00:24:23.152     "model_number": "SPDK bdev Controller",
00:24:23.152     "max_namespaces": 32,
00:24:23.152     "min_cntlid": 1,
00:24:23.152     "max_cntlid": 65519,
00:24:23.152     "namespaces": [
00:24:23.152       {
00:24:23.152         "nsid": 1,
00:24:23.152         "bdev_name": "Malloc0",
00:24:23.152         "name": "Malloc0",
00:24:23.152         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:24:23.152         "eui64": "ABCDEF0123456789",
00:24:23.152         "uuid": "c52487b0-6ac5-4521-8cab-be1b44446586"
00:24:23.152       }
00:24:23.152     ]
00:24:23.152   }
00:24:23.152 ]
00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:23.152 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:23.152 [2024-11-06 09:00:36.258012] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:24:23.152 [2024-11-06 09:00:36.258057] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887014 ] 00:24:23.152 [2024-11-06 09:00:36.307592] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:23.152 [2024-11-06 09:00:36.307662] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:23.152 [2024-11-06 09:00:36.307672] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:23.152 [2024-11-06 09:00:36.307692] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:23.152 [2024-11-06 09:00:36.307705] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:23.152 [2024-11-06 09:00:36.312288] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:23.152 [2024-11-06 09:00:36.312353] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a5d690 0 00:24:23.152 [2024-11-06 09:00:36.312543] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:23.152 [2024-11-06 09:00:36.312561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:23.152 [2024-11-06 09:00:36.312569] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:23.152 [2024-11-06 09:00:36.312575] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:23.152 [2024-11-06 09:00:36.312617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.312630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.312638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.152 [2024-11-06 09:00:36.312656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:23.152 [2024-11-06 09:00:36.312681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.152 [2024-11-06 09:00:36.319845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.152 [2024-11-06 09:00:36.319863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.152 [2024-11-06 09:00:36.319871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.319878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.152 [2024-11-06 09:00:36.319898] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:23.152 [2024-11-06 09:00:36.319910] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:23.152 [2024-11-06 09:00:36.319920] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:23.152 [2024-11-06 09:00:36.319941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.319950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.319957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 
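The fabric CONNECT and FABRIC PROPERTY GET exchanges above step the discovery controller through SPDK's controller-init state machine; the `setting state to ...` *DEBUG* entries in this log record the full sequence. A minimal sketch of that progression, with the state names transcribed from the log entries in this section (the `comes_before` helper is illustrative only, not SPDK API):

```python
# Controller-init states in the order the "setting state to ..." DEBUG
# entries appear in this log (NVMe/TCP discovery controller bring-up).
INIT_STATES = [
    "connect adminq",
    "wait for connect adminq",
    "read vs",
    "read vs wait for vs",
    "read cap",
    "read cap wait for cap",
    "check en",
    "check en wait for cc",
    "disable and wait for CSTS.RDY = 0",
    "controller is disabled",
    "enable controller by writing CC.EN = 1",
    "enable controller by writing CC.EN = 1 reg",
    "wait for CSTS.RDY = 1",
    "reset admin queue",
    "identify controller",
    "wait for identify controller",
    "configure AER",
    "wait for configure aer",
    "set keep alive timeout",
    "wait for set keep alive timeout",
    "ready",
]

def comes_before(a: str, b: str) -> bool:
    """True if state a is reached before state b in this init sequence."""
    return INIT_STATES.index(a) < INIT_STATES.index(b)
```

Reading the log against this list makes the CC.EN/CSTS.RDY handshake easy to follow: the controller is first disabled (RDY = 0), then enabled (CC.EN = 1) and polled until RDY = 1 before IDENTIFY is issued.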
00:24:23.152 [2024-11-06 09:00:36.319968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.152 [2024-11-06 09:00:36.319991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.152 [2024-11-06 09:00:36.320137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.152 [2024-11-06 09:00:36.320152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.152 [2024-11-06 09:00:36.320159] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.320166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.152 [2024-11-06 09:00:36.320175] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:23.152 [2024-11-06 09:00:36.320188] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:23.152 [2024-11-06 09:00:36.320201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.320215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.320222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.152 [2024-11-06 09:00:36.320233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.152 [2024-11-06 09:00:36.320253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.152 [2024-11-06 09:00:36.320336] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.152 [2024-11-06 09:00:36.320350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:23.152 [2024-11-06 09:00:36.320357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.152 [2024-11-06 09:00:36.320363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.320372] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:23.153 [2024-11-06 09:00:36.320386] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:23.153 [2024-11-06 09:00:36.320398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.320422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.153 [2024-11-06 09:00:36.320442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 09:00:36.320535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.320546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.320553] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.320569] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:23.153 [2024-11-06 09:00:36.320590] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320600] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.320617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.153 [2024-11-06 09:00:36.320638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 09:00:36.320712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.320726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.320733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.320748] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:23.153 [2024-11-06 09:00:36.320756] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:23.153 [2024-11-06 09:00:36.320769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:23.153 [2024-11-06 09:00:36.320879] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:23.153 [2024-11-06 09:00:36.320894] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:23.153 [2024-11-06 09:00:36.320909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.320922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.320933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.153 [2024-11-06 09:00:36.320954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 09:00:36.321079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.321091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.321098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.321113] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:23.153 [2024-11-06 09:00:36.321128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.321154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.153 [2024-11-06 09:00:36.321174] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 
09:00:36.321282] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.321296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.321303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.321318] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:23.153 [2024-11-06 09:00:36.321326] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:23.153 [2024-11-06 09:00:36.321339] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:23.153 [2024-11-06 09:00:36.321357] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:23.153 [2024-11-06 09:00:36.321372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.321391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.153 [2024-11-06 09:00:36.321411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 09:00:36.321537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.153 [2024-11-06 09:00:36.321552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:24:23.153 [2024-11-06 09:00:36.321559] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321565] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5d690): datao=0, datal=4096, cccid=0 00:24:23.153 [2024-11-06 09:00:36.321577] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abf100) on tqpair(0x1a5d690): expected_datao=0, payload_size=4096 00:24:23.153 [2024-11-06 09:00:36.321586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321604] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.321614] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.362980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.362999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.363006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.363026] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:23.153 [2024-11-06 09:00:36.363034] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:23.153 [2024-11-06 09:00:36.363042] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:23.153 [2024-11-06 09:00:36.363050] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:23.153 [2024-11-06 09:00:36.363058] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:23.153 [2024-11-06 09:00:36.363066] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:23.153 [2024-11-06 09:00:36.363080] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:23.153 [2024-11-06 09:00:36.363092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363100] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.363118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:23.153 [2024-11-06 09:00:36.363141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.153 [2024-11-06 09:00:36.363232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.153 [2024-11-06 09:00:36.363244] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.153 [2024-11-06 09:00:36.363251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.153 [2024-11-06 09:00:36.363276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.363300] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.153 [2024-11-06 09:00:36.363310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363323] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.363332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.153 [2024-11-06 09:00:36.363341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.363367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.153 [2024-11-06 09:00:36.363377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.153 [2024-11-06 09:00:36.363389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.153 [2024-11-06 09:00:36.363398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.153 [2024-11-06 09:00:36.363407] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:23.154 [2024-11-06 09:00:36.363421] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:23.154 [2024-11-06 09:00:36.363432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.363454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5d690) 00:24:23.154 [2024-11-06 09:00:36.363465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.154 [2024-11-06 09:00:36.363487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf100, cid 0, qid 0 00:24:23.154 [2024-11-06 09:00:36.363498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf280, cid 1, qid 0 00:24:23.154 [2024-11-06 09:00:36.363522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf400, cid 2, qid 0 00:24:23.154 [2024-11-06 09:00:36.363530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.154 [2024-11-06 09:00:36.363537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf700, cid 4, qid 0 00:24:23.154 [2024-11-06 09:00:36.363676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.154 [2024-11-06 09:00:36.363690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.154 [2024-11-06 09:00:36.363697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.363704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf700) on tqpair=0x1a5d690 00:24:23.154 [2024-11-06 09:00:36.363718] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:23.154 [2024-11-06 09:00:36.363728] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:24:23.154 [2024-11-06 09:00:36.363746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.363755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5d690) 00:24:23.154 [2024-11-06 09:00:36.363766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.154 [2024-11-06 09:00:36.363786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf700, cid 4, qid 0 00:24:23.154 [2024-11-06 09:00:36.367847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.154 [2024-11-06 09:00:36.367864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.154 [2024-11-06 09:00:36.367871] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.367877] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5d690): datao=0, datal=4096, cccid=4 00:24:23.154 [2024-11-06 09:00:36.367884] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abf700) on tqpair(0x1a5d690): expected_datao=0, payload_size=4096 00:24:23.154 [2024-11-06 09:00:36.367892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.367906] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.367915] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.367923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.154 [2024-11-06 09:00:36.367933] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.154 [2024-11-06 09:00:36.367939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.367946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1abf700) on tqpair=0x1a5d690 00:24:23.154 [2024-11-06 09:00:36.367965] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:23.154 [2024-11-06 09:00:36.368004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5d690) 00:24:23.154 [2024-11-06 09:00:36.368026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.154 [2024-11-06 09:00:36.368037] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368050] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a5d690) 00:24:23.154 [2024-11-06 09:00:36.368058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.154 [2024-11-06 09:00:36.368084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf700, cid 4, qid 0 00:24:23.154 [2024-11-06 09:00:36.368111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf880, cid 5, qid 0 00:24:23.154 [2024-11-06 09:00:36.368253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.154 [2024-11-06 09:00:36.368266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.154 [2024-11-06 09:00:36.368273] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5d690): datao=0, datal=1024, cccid=4 00:24:23.154 [2024-11-06 09:00:36.368286] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abf700) on tqpair(0x1a5d690): expected_datao=0, payload_size=1024 00:24:23.154 [2024-11-06 09:00:36.368293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368303] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368311] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.154 [2024-11-06 09:00:36.368328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.154 [2024-11-06 09:00:36.368335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.368342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf880) on tqpair=0x1a5d690 00:24:23.154 [2024-11-06 09:00:36.409915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.154 [2024-11-06 09:00:36.409938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.154 [2024-11-06 09:00:36.409946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.409954] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf700) on tqpair=0x1a5d690 00:24:23.154 [2024-11-06 09:00:36.409972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.409981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5d690) 00:24:23.154 [2024-11-06 09:00:36.409993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.154 [2024-11-06 09:00:36.410025] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf700, cid 4, qid 0 00:24:23.154 [2024-11-06 09:00:36.410113] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.154 [2024-11-06 09:00:36.410127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.154 [2024-11-06 09:00:36.410134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.410140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5d690): datao=0, datal=3072, cccid=4 00:24:23.154 [2024-11-06 09:00:36.410148] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abf700) on tqpair(0x1a5d690): expected_datao=0, payload_size=3072 00:24:23.154 [2024-11-06 09:00:36.410155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.410175] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.154 [2024-11-06 09:00:36.410185] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.453873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.416 [2024-11-06 09:00:36.453895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.416 [2024-11-06 09:00:36.453903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.453925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf700) on tqpair=0x1a5d690 00:24:23.416 [2024-11-06 09:00:36.453942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.453951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a5d690) 00:24:23.416 [2024-11-06 09:00:36.453963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.416 [2024-11-06 09:00:36.453995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf700, cid 4, qid 0 00:24:23.416 [2024-11-06 
09:00:36.454085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.416 [2024-11-06 09:00:36.454098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.416 [2024-11-06 09:00:36.454105] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.454111] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a5d690): datao=0, datal=8, cccid=4 00:24:23.416 [2024-11-06 09:00:36.454119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1abf700) on tqpair(0x1a5d690): expected_datao=0, payload_size=8 00:24:23.416 [2024-11-06 09:00:36.454126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.454136] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.454144] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.499851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.416 [2024-11-06 09:00:36.499882] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.416 [2024-11-06 09:00:36.499890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.416 [2024-11-06 09:00:36.499897] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf700) on tqpair=0x1a5d690 00:24:23.416 ===================================================== 00:24:23.416 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:23.416 ===================================================== 00:24:23.416 Controller Capabilities/Features 00:24:23.416 ================================ 00:24:23.416 Vendor ID: 0000 00:24:23.416 Subsystem Vendor ID: 0000 00:24:23.416 Serial Number: .................... 00:24:23.416 Model Number: ........................................ 
00:24:23.416 Firmware Version: 25.01 00:24:23.416 Recommended Arb Burst: 0 00:24:23.416 IEEE OUI Identifier: 00 00 00 00:24:23.416 Multi-path I/O 00:24:23.416 May have multiple subsystem ports: No 00:24:23.416 May have multiple controllers: No 00:24:23.416 Associated with SR-IOV VF: No 00:24:23.416 Max Data Transfer Size: 131072 00:24:23.416 Max Number of Namespaces: 0 00:24:23.416 Max Number of I/O Queues: 1024 00:24:23.416 NVMe Specification Version (VS): 1.3 00:24:23.416 NVMe Specification Version (Identify): 1.3 00:24:23.416 Maximum Queue Entries: 128 00:24:23.416 Contiguous Queues Required: Yes 00:24:23.417 Arbitration Mechanisms Supported 00:24:23.417 Weighted Round Robin: Not Supported 00:24:23.417 Vendor Specific: Not Supported 00:24:23.417 Reset Timeout: 15000 ms 00:24:23.417 Doorbell Stride: 4 bytes 00:24:23.417 NVM Subsystem Reset: Not Supported 00:24:23.417 Command Sets Supported 00:24:23.417 NVM Command Set: Supported 00:24:23.417 Boot Partition: Not Supported 00:24:23.417 Memory Page Size Minimum: 4096 bytes 00:24:23.417 Memory Page Size Maximum: 4096 bytes 00:24:23.417 Persistent Memory Region: Not Supported 00:24:23.417 Optional Asynchronous Events Supported 00:24:23.417 Namespace Attribute Notices: Not Supported 00:24:23.417 Firmware Activation Notices: Not Supported 00:24:23.417 ANA Change Notices: Not Supported 00:24:23.417 PLE Aggregate Log Change Notices: Not Supported 00:24:23.417 LBA Status Info Alert Notices: Not Supported 00:24:23.417 EGE Aggregate Log Change Notices: Not Supported 00:24:23.417 Normal NVM Subsystem Shutdown event: Not Supported 00:24:23.417 Zone Descriptor Change Notices: Not Supported 00:24:23.417 Discovery Log Change Notices: Supported 00:24:23.417 Controller Attributes 00:24:23.417 128-bit Host Identifier: Not Supported 00:24:23.417 Non-Operational Permissive Mode: Not Supported 00:24:23.417 NVM Sets: Not Supported 00:24:23.417 Read Recovery Levels: Not Supported 00:24:23.417 Endurance Groups: Not Supported 00:24:23.417 
Predictable Latency Mode: Not Supported 00:24:23.417 Traffic Based Keep ALive: Not Supported 00:24:23.417 Namespace Granularity: Not Supported 00:24:23.417 SQ Associations: Not Supported 00:24:23.417 UUID List: Not Supported 00:24:23.417 Multi-Domain Subsystem: Not Supported 00:24:23.417 Fixed Capacity Management: Not Supported 00:24:23.417 Variable Capacity Management: Not Supported 00:24:23.417 Delete Endurance Group: Not Supported 00:24:23.417 Delete NVM Set: Not Supported 00:24:23.417 Extended LBA Formats Supported: Not Supported 00:24:23.417 Flexible Data Placement Supported: Not Supported 00:24:23.417 00:24:23.417 Controller Memory Buffer Support 00:24:23.417 ================================ 00:24:23.417 Supported: No 00:24:23.417 00:24:23.417 Persistent Memory Region Support 00:24:23.417 ================================ 00:24:23.417 Supported: No 00:24:23.417 00:24:23.417 Admin Command Set Attributes 00:24:23.417 ============================ 00:24:23.417 Security Send/Receive: Not Supported 00:24:23.417 Format NVM: Not Supported 00:24:23.417 Firmware Activate/Download: Not Supported 00:24:23.417 Namespace Management: Not Supported 00:24:23.417 Device Self-Test: Not Supported 00:24:23.417 Directives: Not Supported 00:24:23.417 NVMe-MI: Not Supported 00:24:23.417 Virtualization Management: Not Supported 00:24:23.417 Doorbell Buffer Config: Not Supported 00:24:23.417 Get LBA Status Capability: Not Supported 00:24:23.417 Command & Feature Lockdown Capability: Not Supported 00:24:23.417 Abort Command Limit: 1 00:24:23.417 Async Event Request Limit: 4 00:24:23.417 Number of Firmware Slots: N/A 00:24:23.417 Firmware Slot 1 Read-Only: N/A 00:24:23.417 Firmware Activation Without Reset: N/A 00:24:23.417 Multiple Update Detection Support: N/A 00:24:23.417 Firmware Update Granularity: No Information Provided 00:24:23.417 Per-Namespace SMART Log: No 00:24:23.417 Asymmetric Namespace Access Log Page: Not Supported 00:24:23.417 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:23.417 Command Effects Log Page: Not Supported 00:24:23.417 Get Log Page Extended Data: Supported 00:24:23.417 Telemetry Log Pages: Not Supported 00:24:23.417 Persistent Event Log Pages: Not Supported 00:24:23.417 Supported Log Pages Log Page: May Support 00:24:23.417 Commands Supported & Effects Log Page: Not Supported 00:24:23.417 Feature Identifiers & Effects Log Page:May Support 00:24:23.417 NVMe-MI Commands & Effects Log Page: May Support 00:24:23.417 Data Area 4 for Telemetry Log: Not Supported 00:24:23.417 Error Log Page Entries Supported: 128 00:24:23.417 Keep Alive: Not Supported 00:24:23.417 00:24:23.417 NVM Command Set Attributes 00:24:23.417 ========================== 00:24:23.417 Submission Queue Entry Size 00:24:23.417 Max: 1 00:24:23.417 Min: 1 00:24:23.417 Completion Queue Entry Size 00:24:23.417 Max: 1 00:24:23.417 Min: 1 00:24:23.417 Number of Namespaces: 0 00:24:23.417 Compare Command: Not Supported 00:24:23.417 Write Uncorrectable Command: Not Supported 00:24:23.417 Dataset Management Command: Not Supported 00:24:23.417 Write Zeroes Command: Not Supported 00:24:23.417 Set Features Save Field: Not Supported 00:24:23.417 Reservations: Not Supported 00:24:23.417 Timestamp: Not Supported 00:24:23.417 Copy: Not Supported 00:24:23.417 Volatile Write Cache: Not Present 00:24:23.417 Atomic Write Unit (Normal): 1 00:24:23.417 Atomic Write Unit (PFail): 1 00:24:23.417 Atomic Compare & Write Unit: 1 00:24:23.417 Fused Compare & Write: Supported 00:24:23.417 Scatter-Gather List 00:24:23.417 SGL Command Set: Supported 00:24:23.417 SGL Keyed: Supported 00:24:23.417 SGL Bit Bucket Descriptor: Not Supported 00:24:23.417 SGL Metadata Pointer: Not Supported 00:24:23.417 Oversized SGL: Not Supported 00:24:23.417 SGL Metadata Address: Not Supported 00:24:23.417 SGL Offset: Supported 00:24:23.417 Transport SGL Data Block: Not Supported 00:24:23.417 Replay Protected Memory Block: Not Supported 00:24:23.417 00:24:23.417 
Firmware Slot Information 00:24:23.417 ========================= 00:24:23.417 Active slot: 0 00:24:23.417 00:24:23.417 00:24:23.417 Error Log 00:24:23.417 ========= 00:24:23.417 00:24:23.417 Active Namespaces 00:24:23.417 ================= 00:24:23.417 Discovery Log Page 00:24:23.417 ================== 00:24:23.417 Generation Counter: 2 00:24:23.417 Number of Records: 2 00:24:23.417 Record Format: 0 00:24:23.417 00:24:23.417 Discovery Log Entry 0 00:24:23.417 ---------------------- 00:24:23.417 Transport Type: 3 (TCP) 00:24:23.417 Address Family: 1 (IPv4) 00:24:23.417 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:23.417 Entry Flags: 00:24:23.417 Duplicate Returned Information: 1 00:24:23.417 Explicit Persistent Connection Support for Discovery: 1 00:24:23.417 Transport Requirements: 00:24:23.417 Secure Channel: Not Required 00:24:23.417 Port ID: 0 (0x0000) 00:24:23.417 Controller ID: 65535 (0xffff) 00:24:23.417 Admin Max SQ Size: 128 00:24:23.417 Transport Service Identifier: 4420 00:24:23.417 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:23.417 Transport Address: 10.0.0.2 00:24:23.417 Discovery Log Entry 1 00:24:23.417 ---------------------- 00:24:23.417 Transport Type: 3 (TCP) 00:24:23.417 Address Family: 1 (IPv4) 00:24:23.417 Subsystem Type: 2 (NVM Subsystem) 00:24:23.417 Entry Flags: 00:24:23.417 Duplicate Returned Information: 0 00:24:23.417 Explicit Persistent Connection Support for Discovery: 0 00:24:23.417 Transport Requirements: 00:24:23.417 Secure Channel: Not Required 00:24:23.417 Port ID: 0 (0x0000) 00:24:23.417 Controller ID: 65535 (0xffff) 00:24:23.417 Admin Max SQ Size: 128 00:24:23.417 Transport Service Identifier: 4420 00:24:23.417 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:23.417 Transport Address: 10.0.0.2 [2024-11-06 09:00:36.500021] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:23.417 [2024-11-06 
09:00:36.500043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf100) on tqpair=0x1a5d690 00:24:23.417 [2024-11-06 09:00:36.500056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.417 [2024-11-06 09:00:36.500065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf280) on tqpair=0x1a5d690 00:24:23.417 [2024-11-06 09:00:36.500073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.417 [2024-11-06 09:00:36.500081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf400) on tqpair=0x1a5d690 00:24:23.417 [2024-11-06 09:00:36.500089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.417 [2024-11-06 09:00:36.500097] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.417 [2024-11-06 09:00:36.500109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.417 [2024-11-06 09:00:36.500122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.417 [2024-11-06 09:00:36.500130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.500163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.500189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.500308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 
09:00:36.500322] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.500329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.500353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.500380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.500408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.500499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.500514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.500521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.500536] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:23.418 [2024-11-06 09:00:36.500544] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:23.418 [2024-11-06 09:00:36.500560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 
[2024-11-06 09:00:36.500576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.500586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.500607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.500684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.500696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.500703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.500726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500742] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.500753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.500773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.500862] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.500878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.500885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on 
tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.500908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.500924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.500935] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.500955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501045] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.501053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.501076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.501102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.501122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:24:23.418 [2024-11-06 09:00:36.501214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.501237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.501263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.501283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501377] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.501384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.501406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.501432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.501453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.501545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.501568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501577] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.501595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.501615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.501707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.418 [2024-11-06 09:00:36.501729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.418 [2024-11-06 09:00:36.501744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.418 [2024-11-06 09:00:36.501755] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.418 [2024-11-06 09:00:36.501774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.418 [2024-11-06 09:00:36.501860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.418 [2024-11-06 09:00:36.501876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.418 [2024-11-06 09:00:36.501883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.501890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.501906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.501915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.501921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.501932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.501952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.502039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502069] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502078] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.502115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.502203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502238] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.502284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.502368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502383] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502399] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.502445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.502531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502561] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.502608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 
09:00:36.502698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502711] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.502774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.502857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.502872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.502879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.502907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.502923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.502934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 
09:00:36.502955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.503044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.503056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.503063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.503086] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.503112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.503132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.503205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.503219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.503226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.503249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.503276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.503295] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.503364] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.503376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.503383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.503405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.503432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.503452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.503530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.503544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.419 [2024-11-06 09:00:36.503551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.419 [2024-11-06 09:00:36.503578] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.419 [2024-11-06 09:00:36.503595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.419 [2024-11-06 09:00:36.503605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.419 [2024-11-06 09:00:36.503625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.419 [2024-11-06 09:00:36.503699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.419 [2024-11-06 09:00:36.503712] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.503719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.503726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.420 [2024-11-06 09:00:36.503742] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.503751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.503758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.420 [2024-11-06 09:00:36.503768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.503788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.420 [2024-11-06 09:00:36.507847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.507865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.507872] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.507880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.420 [2024-11-06 09:00:36.507897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.507907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.507914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a5d690) 00:24:23.420 [2024-11-06 09:00:36.507925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.507947] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1abf580, cid 3, qid 0 00:24:23.420 [2024-11-06 09:00:36.508051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.508065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.508072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.508079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1abf580) on tqpair=0x1a5d690 00:24:23.420 [2024-11-06 09:00:36.508092] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:24:23.420 00:24:23.420 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:23.420 [2024-11-06 09:00:36.545295] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:24:23.420 [2024-11-06 09:00:36.545344] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887132 ] 00:24:23.420 [2024-11-06 09:00:36.595435] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:23.420 [2024-11-06 09:00:36.595495] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:23.420 [2024-11-06 09:00:36.595506] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:23.420 [2024-11-06 09:00:36.595524] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:23.420 [2024-11-06 09:00:36.595535] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:23.420 [2024-11-06 09:00:36.599148] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:23.420 [2024-11-06 09:00:36.599186] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e04690 0 00:24:23.420 [2024-11-06 09:00:36.605841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:23.420 [2024-11-06 09:00:36.605862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:23.420 [2024-11-06 09:00:36.605871] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:23.420 [2024-11-06 09:00:36.605877] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:23.420 [2024-11-06 09:00:36.605910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.605922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.605929] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.420 [2024-11-06 09:00:36.605942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:23.420 [2024-11-06 09:00:36.605969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.420 [2024-11-06 09:00:36.613848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.613867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.613878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.613885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.420 [2024-11-06 09:00:36.613899] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:23.420 [2024-11-06 09:00:36.613910] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:23.420 [2024-11-06 09:00:36.613919] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:23.420 [2024-11-06 09:00:36.613943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.613952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.613959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.420 [2024-11-06 09:00:36.613970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.613995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.420 [2024-11-06 09:00:36.614105] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.614117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.614124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.420 [2024-11-06 09:00:36.614139] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:23.420 [2024-11-06 09:00:36.614152] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:23.420 [2024-11-06 09:00:36.614164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.420 [2024-11-06 09:00:36.614193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.614215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.420 [2024-11-06 09:00:36.614292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.614306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.614313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.420 [2024-11-06 09:00:36.614328] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:24:23.420 [2024-11-06 09:00:36.614342] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:23.420 [2024-11-06 09:00:36.614354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.420 [2024-11-06 09:00:36.614377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.614398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.420 [2024-11-06 09:00:36.614474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.614488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.614494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.420 [2024-11-06 09:00:36.614509] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:23.420 [2024-11-06 09:00:36.614530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.420 [2024-11-06 09:00:36.614556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.420 [2024-11-06 09:00:36.614577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.420 [2024-11-06 09:00:36.614648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.420 [2024-11-06 09:00:36.614660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.420 [2024-11-06 09:00:36.614666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.420 [2024-11-06 09:00:36.614673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.420 [2024-11-06 09:00:36.614680] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:23.420 [2024-11-06 09:00:36.614688] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:23.420 [2024-11-06 09:00:36.614701] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:23.420 [2024-11-06 09:00:36.614811] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:23.421 [2024-11-06 09:00:36.614819] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:23.421 [2024-11-06 09:00:36.614846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.614857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.614864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.614874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.421 [2024-11-06 09:00:36.614896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.421 [2024-11-06 09:00:36.615005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.421 [2024-11-06 09:00:36.615019] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.421 [2024-11-06 09:00:36.615026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.421 [2024-11-06 09:00:36.615040] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:23.421 [2024-11-06 09:00:36.615056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615064] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.421 [2024-11-06 09:00:36.615101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.421 [2024-11-06 09:00:36.615176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.421 [2024-11-06 09:00:36.615189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.421 [2024-11-06 09:00:36.615196] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.421 [2024-11-06 09:00:36.615210] 
nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:23.421 [2024-11-06 09:00:36.615218] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615231] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:23.421 [2024-11-06 09:00:36.615245] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615259] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.421 [2024-11-06 09:00:36.615298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.421 [2024-11-06 09:00:36.615419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.421 [2024-11-06 09:00:36.615434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.421 [2024-11-06 09:00:36.615441] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615447] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=4096, cccid=0 00:24:23.421 [2024-11-06 09:00:36.615454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66100) on tqpair(0x1e04690): expected_datao=0, payload_size=4096 00:24:23.421 [2024-11-06 09:00:36.615465] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615477] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615484] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615496] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.421 [2024-11-06 09:00:36.615506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.421 [2024-11-06 09:00:36.615513] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.421 [2024-11-06 09:00:36.615531] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:23.421 [2024-11-06 09:00:36.615539] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:23.421 [2024-11-06 09:00:36.615547] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:23.421 [2024-11-06 09:00:36.615554] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:23.421 [2024-11-06 09:00:36.615561] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:23.421 [2024-11-06 09:00:36.615569] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615583] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615595] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615602] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:23.421 [2024-11-06 09:00:36.615641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66100, cid 0, qid 0 00:24:23.421 [2024-11-06 09:00:36.615716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.421 [2024-11-06 09:00:36.615730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.421 [2024-11-06 09:00:36.615737] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690 00:24:23.421 [2024-11-06 09:00:36.615757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.421 [2024-11-06 09:00:36.615791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:23.421 [2024-11-06 09:00:36.615822] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615829] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.421 [2024-11-06 09:00:36.615880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690) 00:24:23.421 [2024-11-06 09:00:36.615902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:23.421 [2024-11-06 09:00:36.615911] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615926] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:23.421 [2024-11-06 09:00:36.615937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.421 [2024-11-06 09:00:36.615943] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690) 00:24:23.422 [2024-11-06 09:00:36.615953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.422 [2024-11-06 09:00:36.615976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1e66100, cid 0, qid 0 00:24:23.422 [2024-11-06 09:00:36.615987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66280, cid 1, qid 0 00:24:23.422 [2024-11-06 09:00:36.615995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66400, cid 2, qid 0 00:24:23.422 [2024-11-06 09:00:36.616002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0 00:24:23.422 [2024-11-06 09:00:36.616010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0 00:24:23.422 [2024-11-06 09:00:36.616155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.422 [2024-11-06 09:00:36.616169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.422 [2024-11-06 09:00:36.616176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690 00:24:23.422 [2024-11-06 09:00:36.616194] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:23.422 [2024-11-06 09:00:36.616204] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616218] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616228] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.422 [2024-11-06 
09:00:36.616252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690) 00:24:23.422 [2024-11-06 09:00:36.616262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:23.422 [2024-11-06 09:00:36.616282] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0 00:24:23.422 [2024-11-06 09:00:36.616395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.422 [2024-11-06 09:00:36.616407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.422 [2024-11-06 09:00:36.616413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690 00:24:23.422 [2024-11-06 09:00:36.616488] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616512] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690) 00:24:23.422 [2024-11-06 09:00:36.616544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.422 [2024-11-06 09:00:36.616565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0 00:24:23.422 [2024-11-06 09:00:36.616651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:23.422 [2024-11-06 09:00:36.616663] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:23.422 [2024-11-06 09:00:36.616669] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616675] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=4096, cccid=4 00:24:23.422 [2024-11-06 09:00:36.616683] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66700) on tqpair(0x1e04690): expected_datao=0, payload_size=4096 00:24:23.422 [2024-11-06 09:00:36.616690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616706] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616714] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.422 [2024-11-06 09:00:36.616736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.422 [2024-11-06 09:00:36.616742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690 00:24:23.422 [2024-11-06 09:00:36.616764] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:23.422 [2024-11-06 09:00:36.616785] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616803] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:23.422 [2024-11-06 09:00:36.616816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.422 [2024-11-06 09:00:36.616823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1e04690)
00:24:23.422 [2024-11-06 09:00:36.616842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.422 [2024-11-06 09:00:36.616867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0
00:24:23.422 [2024-11-06 09:00:36.616980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.422 [2024-11-06 09:00:36.616994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.422 [2024-11-06 09:00:36.617001] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617007] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=4096, cccid=4
00:24:23.422 [2024-11-06 09:00:36.617014] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66700) on tqpair(0x1e04690): expected_datao=0, payload_size=4096
00:24:23.422 [2024-11-06 09:00:36.617021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617037] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617046] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.422 [2024-11-06 09:00:36.617067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.422 [2024-11-06 09:00:36.617077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690
00:24:23.422 [2024-11-06 09:00:36.617105] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.617123] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.617137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690)
00:24:23.422 [2024-11-06 09:00:36.617155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.422 [2024-11-06 09:00:36.617176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0
00:24:23.422 [2024-11-06 09:00:36.617270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.422 [2024-11-06 09:00:36.617283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.422 [2024-11-06 09:00:36.617290] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617296] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=4096, cccid=4
00:24:23.422 [2024-11-06 09:00:36.617303] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66700) on tqpair(0x1e04690): expected_datao=0, payload_size=4096
00:24:23.422 [2024-11-06 09:00:36.617310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617320] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.617328] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.661845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.422 [2024-11-06 09:00:36.661864] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.422 [2024-11-06 09:00:36.661871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.661893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690
00:24:23.422 [2024-11-06 09:00:36.661908] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661923] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661939] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661950] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661959] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661968] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661977] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:24:23.422 [2024-11-06 09:00:36.661984] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:24:23.422 [2024-11-06 09:00:36.661993] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:24:23.422 [2024-11-06 09:00:36.662014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.662022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690)
00:24:23.422 [2024-11-06 09:00:36.662037] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.422 [2024-11-06 09:00:36.662049] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.422 [2024-11-06 09:00:36.662056] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:24:23.423 [2024-11-06 09:00:36.662098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0
00:24:23.423 [2024-11-06 09:00:36.662110] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66880, cid 5, qid 0
00:24:23.423 [2024-11-06 09:00:36.662199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.662213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.662220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.662237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.662246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.662253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66880) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.662274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66880, cid 5, qid 0
00:24:23.423 [2024-11-06 09:00:36.662395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.662407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.662414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66880) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.662435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66880, cid 5, qid 0
00:24:23.423 [2024-11-06 09:00:36.662552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.662566] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.662573] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66880) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.662594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662603] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66880, cid 5, qid 0
00:24:23.423 [2024-11-06 09:00:36.662707] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.662723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.662731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66880) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.662763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662773] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662803] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.662875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e04690)
00:24:23.423 [2024-11-06 09:00:36.662884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.423 [2024-11-06 09:00:36.662907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66880, cid 5, qid 0
00:24:23.423 [2024-11-06 09:00:36.662918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66700, cid 4, qid 0
00:24:23.423 [2024-11-06 09:00:36.662925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66a00, cid 6, qid 0
00:24:23.423 [2024-11-06 09:00:36.662932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66b80, cid 7, qid 0
00:24:23.423 [2024-11-06 09:00:36.663115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.423 [2024-11-06 09:00:36.663127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.423 [2024-11-06 09:00:36.663134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663140] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=8192, cccid=5
00:24:23.423 [2024-11-06 09:00:36.663148] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66880) on tqpair(0x1e04690): expected_datao=0, payload_size=8192
00:24:23.423 [2024-11-06 09:00:36.663155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663173] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663182] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.423 [2024-11-06 09:00:36.663204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.423 [2024-11-06 09:00:36.663211] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=512, cccid=4
00:24:23.423 [2024-11-06 09:00:36.663224] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66700) on tqpair(0x1e04690): expected_datao=0, payload_size=512
00:24:23.423 [2024-11-06 09:00:36.663231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663245] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663253] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.423 [2024-11-06 09:00:36.663271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.423 [2024-11-06 09:00:36.663277] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=512, cccid=6
00:24:23.423 [2024-11-06 09:00:36.663291] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66a00) on tqpair(0x1e04690): expected_datao=0, payload_size=512
00:24:23.423 [2024-11-06 09:00:36.663298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663307] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663314] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:23.423 [2024-11-06 09:00:36.663331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:23.423 [2024-11-06 09:00:36.663338] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663344] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e04690): datao=0, datal=4096, cccid=7
00:24:23.423 [2024-11-06 09:00:36.663351] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e66b80) on tqpair(0x1e04690): expected_datao=0, payload_size=4096
00:24:23.423 [2024-11-06 09:00:36.663359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663368] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663375] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663387] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.663396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.663403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.423 [2024-11-06 09:00:36.663409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66880) on tqpair=0x1e04690
00:24:23.423 [2024-11-06 09:00:36.663430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.423 [2024-11-06 09:00:36.663442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.423 [2024-11-06 09:00:36.663448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.424 [2024-11-06 09:00:36.663455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66700) on tqpair=0x1e04690
00:24:23.424 [2024-11-06 09:00:36.663485] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.424 [2024-11-06 09:00:36.663496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.424 [2024-11-06 09:00:36.663503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.424 [2024-11-06 09:00:36.663509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66a00) on tqpair=0x1e04690
00:24:23.424 [2024-11-06 09:00:36.663519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.424 [2024-11-06 09:00:36.663529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.424 [2024-11-06 09:00:36.663549] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.424 [2024-11-06 09:00:36.663555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66b80) on tqpair=0x1e04690
00:24:23.424 =====================================================
00:24:23.424 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:23.424 =====================================================
00:24:23.424 Controller Capabilities/Features
00:24:23.424 ================================
00:24:23.424 Vendor ID: 8086
00:24:23.424 Subsystem Vendor ID: 8086
00:24:23.424 Serial Number: SPDK00000000000001
00:24:23.424 Model Number: SPDK bdev Controller
00:24:23.424 Firmware Version: 25.01
00:24:23.424 Recommended Arb Burst: 6
00:24:23.424 IEEE OUI Identifier: e4 d2 5c
00:24:23.424 Multi-path I/O
00:24:23.424 May have multiple subsystem ports: Yes
00:24:23.424 May have multiple controllers: Yes
00:24:23.424 Associated with SR-IOV VF: No
00:24:23.424 Max Data Transfer Size: 131072
00:24:23.424 Max Number of Namespaces: 32
00:24:23.424 Max Number of I/O Queues: 127
00:24:23.424 NVMe Specification Version (VS): 1.3
00:24:23.424 NVMe Specification Version (Identify): 1.3
00:24:23.424 Maximum Queue Entries: 128
00:24:23.424 Contiguous Queues Required: Yes
00:24:23.424 Arbitration Mechanisms Supported
00:24:23.424 Weighted Round Robin: Not Supported
00:24:23.424 Vendor Specific: Not Supported
00:24:23.424 Reset Timeout: 15000 ms
00:24:23.424 Doorbell Stride: 4 bytes
00:24:23.424 NVM Subsystem Reset: Not Supported
00:24:23.424 Command Sets Supported
00:24:23.424 NVM Command Set: Supported
00:24:23.424 Boot Partition: Not Supported
00:24:23.424 Memory Page Size Minimum: 4096 bytes
00:24:23.424 Memory Page Size Maximum: 4096 bytes
00:24:23.424 Persistent Memory Region: Not Supported
00:24:23.424 Optional Asynchronous Events Supported
00:24:23.424 Namespace Attribute Notices: Supported
00:24:23.424 Firmware Activation Notices: Not Supported
00:24:23.424 ANA Change Notices: Not Supported
00:24:23.424 PLE Aggregate Log Change Notices: Not Supported
00:24:23.424 LBA Status Info Alert Notices: Not Supported
00:24:23.424 EGE Aggregate Log Change Notices: Not Supported
00:24:23.424 Normal NVM Subsystem Shutdown event: Not Supported
00:24:23.424 Zone Descriptor Change Notices: Not Supported
00:24:23.424 Discovery Log Change Notices: Not Supported
00:24:23.424 Controller Attributes
00:24:23.424 128-bit Host Identifier: Supported
00:24:23.424 Non-Operational Permissive Mode: Not Supported
00:24:23.424 NVM Sets: Not Supported
00:24:23.424 Read Recovery Levels: Not Supported
00:24:23.424 Endurance Groups: Not Supported
00:24:23.424 Predictable Latency Mode: Not Supported
00:24:23.424 Traffic Based Keep ALive: Not Supported
00:24:23.424 Namespace Granularity: Not Supported
00:24:23.424 SQ Associations: Not Supported
00:24:23.424 UUID List: Not Supported
00:24:23.424 Multi-Domain Subsystem: Not Supported
00:24:23.424 Fixed Capacity Management: Not Supported
00:24:23.424 Variable Capacity Management: Not Supported
00:24:23.424 Delete Endurance Group: Not Supported
00:24:23.424 Delete NVM Set: Not Supported
00:24:23.424 Extended LBA Formats Supported: Not Supported
00:24:23.424 Flexible Data Placement Supported: Not Supported
00:24:23.424
00:24:23.424 Controller Memory Buffer Support
00:24:23.424 ================================
00:24:23.424 Supported: No
00:24:23.424
00:24:23.424 Persistent Memory Region Support
00:24:23.424 ================================
00:24:23.424 Supported: No
00:24:23.424
00:24:23.424 Admin Command Set Attributes
00:24:23.424 ============================
00:24:23.424 Security Send/Receive: Not Supported
00:24:23.424 Format NVM: Not Supported
00:24:23.424 Firmware Activate/Download: Not Supported
00:24:23.424 Namespace Management: Not Supported
00:24:23.424 Device Self-Test: Not Supported
00:24:23.424 Directives: Not Supported
00:24:23.424 NVMe-MI: Not Supported
00:24:23.424 Virtualization Management: Not Supported
00:24:23.424 Doorbell Buffer Config: Not Supported
00:24:23.424 Get LBA Status Capability: Not Supported
00:24:23.424 Command & Feature Lockdown Capability: Not Supported
00:24:23.424 Abort Command Limit: 4
00:24:23.424 Async Event Request Limit: 4
00:24:23.424 Number of Firmware Slots: N/A
00:24:23.424 Firmware Slot 1 Read-Only: N/A
00:24:23.424 Firmware Activation Without Reset: N/A
00:24:23.424 Multiple Update Detection Support: N/A
00:24:23.424 Firmware Update Granularity: No Information Provided
00:24:23.424 Per-Namespace SMART Log: No
00:24:23.424 Asymmetric Namespace Access Log Page: Not Supported
00:24:23.424 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:23.424 Command Effects Log Page: Supported
00:24:23.424 Get Log Page Extended Data: Supported
00:24:23.424 Telemetry Log Pages: Not Supported
00:24:23.424 Persistent Event Log Pages: Not Supported
00:24:23.424 Supported Log Pages Log Page: May Support
00:24:23.424 Commands Supported & Effects Log Page: Not Supported
00:24:23.424 Feature Identifiers & Effects Log Page: May Support
00:24:23.424 NVMe-MI Commands & Effects Log Page: May Support
00:24:23.424 Data Area 4 for Telemetry Log: Not Supported
00:24:23.424 Error Log Page Entries Supported: 128
00:24:23.424 Keep Alive: Supported
00:24:23.424 Keep Alive Granularity: 10000 ms
00:24:23.424
00:24:23.424 NVM Command Set Attributes
00:24:23.424 ==========================
00:24:23.424 Submission Queue Entry Size
00:24:23.424 Max: 64
00:24:23.424 Min: 64
00:24:23.424 Completion Queue Entry Size
00:24:23.424 Max: 16
00:24:23.424 Min: 16
00:24:23.424 Number of Namespaces: 32
00:24:23.424 Compare Command: Supported
00:24:23.424 Write Uncorrectable Command: Not Supported
00:24:23.424 Dataset Management Command: Supported
00:24:23.424 Write Zeroes Command: Supported
00:24:23.424 Set Features Save Field: Not Supported
00:24:23.424 Reservations: Supported
00:24:23.424 Timestamp: Not Supported
00:24:23.424 Copy: Supported
00:24:23.424 Volatile Write Cache: Present
00:24:23.424 Atomic Write Unit (Normal): 1
00:24:23.424 Atomic Write Unit (PFail): 1
00:24:23.424 Atomic Compare & Write Unit: 1
00:24:23.424 Fused Compare & Write: Supported
00:24:23.424 Scatter-Gather List
00:24:23.424 SGL Command Set: Supported
00:24:23.424 SGL Keyed: Supported
00:24:23.424 SGL Bit Bucket Descriptor: Not Supported
00:24:23.424 SGL Metadata Pointer: Not Supported
00:24:23.424 Oversized SGL: Not Supported
00:24:23.424 SGL Metadata Address: Not Supported
00:24:23.424 SGL Offset: Supported
00:24:23.424 Transport SGL Data Block: Not Supported
00:24:23.424 Replay Protected Memory Block: Not Supported
00:24:23.424
00:24:23.424 Firmware Slot Information
00:24:23.424 =========================
00:24:23.424 Active slot: 1
00:24:23.424 Slot 1 Firmware Revision: 25.01
00:24:23.424
00:24:23.424
00:24:23.424 Commands Supported and Effects
00:24:23.424 ==============================
00:24:23.424 Admin Commands
00:24:23.424 --------------
00:24:23.424 Get Log Page (02h): Supported
00:24:23.424 Identify (06h): Supported
00:24:23.424 Abort (08h): Supported
00:24:23.424 Set Features (09h): Supported
00:24:23.424 Get Features (0Ah): Supported
00:24:23.424 Asynchronous Event Request (0Ch): Supported
00:24:23.424 Keep Alive (18h): Supported
00:24:23.424 I/O Commands
00:24:23.424 ------------
00:24:23.424 Flush (00h): Supported LBA-Change
00:24:23.424 Write (01h): Supported LBA-Change
00:24:23.424 Read (02h): Supported
00:24:23.424 Compare (05h): Supported
00:24:23.424 Write Zeroes (08h): Supported LBA-Change
00:24:23.425 Dataset Management (09h): Supported LBA-Change
00:24:23.425 Copy (19h): Supported LBA-Change
00:24:23.425
00:24:23.425 Error Log
00:24:23.425 =========
00:24:23.425
00:24:23.425 Arbitration
00:24:23.425 ===========
00:24:23.425 Arbitration Burst: 1
00:24:23.425
00:24:23.425 Power Management
00:24:23.425 ================
00:24:23.425 Number of Power States: 1
00:24:23.425 Current Power State: Power State #0
00:24:23.425 Power State #0:
00:24:23.425 Max Power: 0.00 W
00:24:23.425 Non-Operational State: Operational
00:24:23.425 Entry Latency: Not Reported
00:24:23.425 Exit Latency: Not Reported
00:24:23.425 Relative Read Throughput: 0
00:24:23.425 Relative Read Latency: 0
00:24:23.425 Relative Write Throughput: 0
00:24:23.425 Relative Write Latency: 0
00:24:23.425 Idle Power: Not Reported
00:24:23.425 Active Power: Not Reported
00:24:23.425 Non-Operational Permissive Mode: Not Supported
00:24:23.425
00:24:23.425 Health Information
00:24:23.425 ==================
00:24:23.425 Critical Warnings:
00:24:23.425 Available Spare Space: OK
00:24:23.425 Temperature: OK
00:24:23.425 Device Reliability: OK
00:24:23.425 Read Only: No
00:24:23.425 Volatile Memory Backup: OK
00:24:23.425 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:23.425 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:24:23.425 Available Spare: 0%
00:24:23.425 Available Spare Threshold: 0%
00:24:23.425 Life Percentage Used:[2024-11-06 09:00:36.663680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.663692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.663703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.663724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66b80, cid 7, qid 0
00:24:23.425 [2024-11-06 09:00:36.663851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.663869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.663877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.663884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66b80) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.663932] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:24:23.425 [2024-11-06 09:00:36.663951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66100) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.663961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.425 [2024-11-06 09:00:36.663970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66280) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.663977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.425 [2024-11-06 09:00:36.663985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66400) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.663992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.425 [2024-11-06 09:00:36.664000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:23.425 [2024-11-06 09:00:36.664019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.664166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.664173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.664342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.664348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664363] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:24:23.425 [2024-11-06 09:00:36.664370] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:24:23.425 [2024-11-06 09:00:36.664385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.664517] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.664524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664662] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.664673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.664680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664687] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664717] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.664842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.664850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664857] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.664873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.664888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.664899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.664919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.664997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.665011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.665017] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.665024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.665040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.665049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.665055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.425 [2024-11-06 09:00:36.665069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.425 [2024-11-06 09:00:36.665090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.425 [2024-11-06 09:00:36.665163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.425 [2024-11-06 09:00:36.665177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.425 [2024-11-06 09:00:36.665183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.665190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.425 [2024-11-06 09:00:36.665206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.425 [2024-11-06 09:00:36.665214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.426 [2024-11-06 09:00:36.665231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.426 [2024-11-06 09:00:36.665251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.426 [2024-11-06 09:00:36.665323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.426 [2024-11-06 09:00:36.665335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.426 [2024-11-06 09:00:36.665341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.426 [2024-11-06 09:00:36.665363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665372] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.426 [2024-11-06 09:00:36.665388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.426 [2024-11-06 09:00:36.665408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.426 [2024-11-06 09:00:36.665500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.426 [2024-11-06 09:00:36.665512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.426 [2024-11-06 09:00:36.665519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665526] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.426 [2024-11-06 09:00:36.665541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690)
00:24:23.426 [2024-11-06 09:00:36.665566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:23.426 [2024-11-06 09:00:36.665586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0
00:24:23.426 [2024-11-06 09:00:36.665659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:23.426 [2024-11-06 09:00:36.665672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:23.426 [2024-11-06 09:00:36.665679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:23.426 [2024-11-06 09:00:36.665685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690
00:24:23.426 [2024-11-06 09:00:36.665701] nvme_tcp.c:
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.665710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.665716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690) 00:24:23.426 [2024-11-06 09:00:36.665726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.426 [2024-11-06 09:00:36.665753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0 00:24:23.426 [2024-11-06 09:00:36.665820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.426 [2024-11-06 09:00:36.669840] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.426 [2024-11-06 09:00:36.669854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.669861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690 00:24:23.426 [2024-11-06 09:00:36.669893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.669903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.669909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e04690) 00:24:23.426 [2024-11-06 09:00:36.669919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.426 [2024-11-06 09:00:36.669942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e66580, cid 3, qid 0 00:24:23.426 [2024-11-06 09:00:36.670052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:23.426 [2024-11-06 09:00:36.670064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:23.426 [2024-11-06 09:00:36.670071] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:23.426 [2024-11-06 09:00:36.670078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e66580) on tqpair=0x1e04690 00:24:23.426 [2024-11-06 09:00:36.670090] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:24:23.426 0% 00:24:23.426 Data Units Read: 0 00:24:23.426 Data Units Written: 0 00:24:23.426 Host Read Commands: 0 00:24:23.426 Host Write Commands: 0 00:24:23.426 Controller Busy Time: 0 minutes 00:24:23.426 Power Cycles: 0 00:24:23.426 Power On Hours: 0 hours 00:24:23.426 Unsafe Shutdowns: 0 00:24:23.426 Unrecoverable Media Errors: 0 00:24:23.426 Lifetime Error Log Entries: 0 00:24:23.426 Warning Temperature Time: 0 minutes 00:24:23.426 Critical Temperature Time: 0 minutes 00:24:23.426 00:24:23.426 Number of Queues 00:24:23.426 ================ 00:24:23.426 Number of I/O Submission Queues: 127 00:24:23.426 Number of I/O Completion Queues: 127 00:24:23.426 00:24:23.426 Active Namespaces 00:24:23.426 ================= 00:24:23.426 Namespace ID:1 00:24:23.426 Error Recovery Timeout: Unlimited 00:24:23.426 Command Set Identifier: NVM (00h) 00:24:23.426 Deallocate: Supported 00:24:23.426 Deallocated/Unwritten Error: Not Supported 00:24:23.426 Deallocated Read Value: Unknown 00:24:23.426 Deallocate in Write Zeroes: Not Supported 00:24:23.426 Deallocated Guard Field: 0xFFFF 00:24:23.426 Flush: Supported 00:24:23.426 Reservation: Supported 00:24:23.426 Namespace Sharing Capabilities: Multiple Controllers 00:24:23.426 Size (in LBAs): 131072 (0GiB) 00:24:23.426 Capacity (in LBAs): 131072 (0GiB) 00:24:23.426 Utilization (in LBAs): 131072 (0GiB) 00:24:23.426 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:23.426 EUI64: ABCDEF0123456789 00:24:23.426 UUID: c52487b0-6ac5-4521-8cab-be1b44446586 00:24:23.426 Thin Provisioning: Not Supported 00:24:23.426 Per-NS Atomic Units: Yes 00:24:23.426 Atomic Boundary Size 
(Normal): 0 00:24:23.426 Atomic Boundary Size (PFail): 0 00:24:23.426 Atomic Boundary Offset: 0 00:24:23.426 Maximum Single Source Range Length: 65535 00:24:23.426 Maximum Copy Length: 65535 00:24:23.426 Maximum Source Range Count: 1 00:24:23.426 NGUID/EUI64 Never Reused: No 00:24:23.426 Namespace Write Protected: No 00:24:23.426 Number of LBA Formats: 1 00:24:23.426 Current LBA Format: LBA Format #00 00:24:23.426 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:23.426 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:23.426 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:23.684 rmmod nvme_tcp 00:24:23.684 rmmod nvme_fabrics 00:24:23.684 rmmod nvme_keyring 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 886991 ']' 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 886991 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 886991 ']' 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 886991 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 886991 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 886991' 00:24:23.684 killing process with pid 886991 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 886991 00:24:23.684 09:00:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 886991 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:23.942 09:00:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.942 09:00:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.844 00:24:25.844 real 0m5.725s 00:24:25.844 user 0m5.030s 00:24:25.844 sys 0m2.037s 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:25.844 ************************************ 00:24:25.844 END TEST nvmf_identify 00:24:25.844 ************************************ 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.844 09:00:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.103 ************************************ 00:24:26.103 START TEST nvmf_perf 00:24:26.103 ************************************ 
00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:26.103 * Looking for test storage... 00:24:26.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lcov --version 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:26.103 09:00:39 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.103 --rc genhtml_branch_coverage=1 00:24:26.103 --rc genhtml_function_coverage=1 00:24:26.103 --rc genhtml_legend=1 00:24:26.103 --rc geninfo_all_blocks=1 00:24:26.103 --rc geninfo_unexecuted_blocks=1 00:24:26.103 00:24:26.103 
' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:24:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.103 --rc genhtml_branch_coverage=1 00:24:26.103 --rc genhtml_function_coverage=1 00:24:26.103 --rc genhtml_legend=1 00:24:26.103 --rc geninfo_all_blocks=1 00:24:26.103 --rc geninfo_unexecuted_blocks=1 00:24:26.103 00:24:26.103 ' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.103 --rc genhtml_branch_coverage=1 00:24:26.103 --rc genhtml_function_coverage=1 00:24:26.103 --rc genhtml_legend=1 00:24:26.103 --rc geninfo_all_blocks=1 00:24:26.103 --rc geninfo_unexecuted_blocks=1 00:24:26.103 00:24:26.103 ' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:26.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:26.103 --rc genhtml_branch_coverage=1 00:24:26.103 --rc genhtml_function_coverage=1 00:24:26.103 --rc genhtml_legend=1 00:24:26.103 --rc geninfo_all_blocks=1 00:24:26.103 --rc geninfo_unexecuted_blocks=1 00:24:26.103 00:24:26.103 ' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.103 09:00:39 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.103 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:26.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:26.104 09:00:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:26.104 09:00:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:28.638 09:00:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.638 
09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:28.638 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:28.638 Found 0000:09:00.1 (0x8086 - 
0x159b) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.638 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:28.638 Found net devices under 0000:09:00.0: cvl_0_0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:28.639 09:00:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:28.639 Found net devices under 0000:09:00.1: cvl_0_1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:28.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:24:28.639 00:24:28.639 --- 10.0.0.2 ping statistics --- 00:24:28.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.639 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:24:28.639 00:24:28.639 --- 10.0.0.1 ping statistics --- 00:24:28.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.639 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=889077 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 889077 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 889077 ']' 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.639 [2024-11-06 09:00:41.520770] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
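The waitforlisten step traced above blocks until the freshly started nvmf_tgt opens its JSON-RPC socket at /var/tmp/spdk.sock before any rpc.py calls are issued. A minimal standalone sketch of that polling loop (the function name and default retry count here are assumptions for illustration, not SPDK's actual autotest_common.sh helper):

```shell
# Poll until a UNIX-domain socket appears, up to ~retries * 0.1 s.
# Simplified sketch of the waitforlisten behaviour seen in the log,
# NOT the real autotest_common.sh implementation.
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  local i
  for ((i = 0; i < retries; i++)); do
    [[ -S $sock ]] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out without seeing the socket
}
```

Usage would be along the lines of `wait_for_sock /var/tmp/spdk.sock || exit 1` before the first rpc.py invocation.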
00:24:28.639 [2024-11-06 09:00:41.520883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.639 [2024-11-06 09:00:41.593144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.639 [2024-11-06 09:00:41.653811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.639 [2024-11-06 09:00:41.653913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.639 [2024-11-06 09:00:41.653929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.639 [2024-11-06 09:00:41.653941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.639 [2024-11-06 09:00:41.653951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
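nvmf_tgt was launched with `-m 0xF`, and the reactor notices above confirm one reactor per set bit: cores 0 through 3. A quick sketch decoding such a hex core mask into the core list it selects (pure bash arithmetic, no SPDK required):

```shell
# Decode an SPDK-style core mask (as passed via -m 0xF) into cores.
mask=0xF
cores=()
for i in $(seq 0 31); do           # 32 bits is plenty for this sketch
  (( (mask >> i) & 1 )) && cores+=("$i")
done
echo "cores: ${cores[*]}"          # -> cores: 0 1 2 3
```

This matches the four "Reactor started on core N" lines in the log (cores 0-3 for mask 0xF).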
00:24:28.639 [2024-11-06 09:00:41.655537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.639 [2024-11-06 09:00:41.655588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.639 [2024-11-06 09:00:41.655637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.639 [2024-11-06 09:00:41.655640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:28.639 09:00:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:31.931 09:00:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:31.931 09:00:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:31.931 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:24:31.931 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:32.495 09:00:45 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:32.495 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:24:32.495 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:32.495 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:32.495 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:32.752 [2024-11-06 09:00:45.827934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.752 09:00:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.009 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.009 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.266 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:33.266 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:33.524 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.782 [2024-11-06 09:00:46.911878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.782 09:00:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:34.039 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:24:34.039 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:34.039 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:34.039 09:00:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:35.410 Initializing NVMe Controllers 00:24:35.410 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:24:35.410 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:24:35.410 Initialization complete. Launching workers. 00:24:35.410 ======================================================== 00:24:35.410 Latency(us) 00:24:35.410 Device Information : IOPS MiB/s Average min max 00:24:35.410 PCIE (0000:0b:00.0) NSID 1 from core 0: 85131.31 332.54 375.35 16.32 7308.05 00:24:35.410 ======================================================== 00:24:35.410 Total : 85131.31 332.54 375.35 16.32 7308.05 00:24:35.410 00:24:35.410 09:00:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:36.781 Initializing NVMe Controllers 00:24:36.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:36.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:36.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:36.781 Initialization complete. Launching workers. 
00:24:36.781 ======================================================== 00:24:36.781 Latency(us) 00:24:36.781 Device Information : IOPS MiB/s Average min max 00:24:36.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 137.51 0.54 7445.18 149.33 45839.62 00:24:36.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.73 0.30 13135.64 6984.21 47901.60 00:24:36.781 ======================================================== 00:24:36.781 Total : 214.24 0.84 9483.16 149.33 47901.60 00:24:36.781 00:24:36.781 09:00:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:37.713 Initializing NVMe Controllers 00:24:37.713 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.713 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:37.713 Initialization complete. Launching workers. 
00:24:37.713 ======================================================== 00:24:37.713 Latency(us) 00:24:37.713 Device Information : IOPS MiB/s Average min max 00:24:37.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8167.00 31.90 3919.13 687.01 11257.05 00:24:37.713 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3709.00 14.49 8697.71 4977.73 18847.30 00:24:37.713 ======================================================== 00:24:37.713 Total : 11876.00 46.39 5411.53 687.01 18847.30 00:24:37.713 00:24:37.713 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:37.713 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:37.713 09:00:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.239 Initializing NVMe Controllers 00:24:40.239 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.239 Controller IO queue size 128, less than required. 00:24:40.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.239 Controller IO queue size 128, less than required. 00:24:40.239 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:40.239 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:40.239 Initialization complete. Launching workers. 
00:24:40.239 ======================================================== 00:24:40.239 Latency(us) 00:24:40.239 Device Information : IOPS MiB/s Average min max 00:24:40.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1779.91 444.98 73066.38 55095.27 126151.52 00:24:40.239 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 572.83 143.21 230737.06 111505.14 350612.13 00:24:40.239 ======================================================== 00:24:40.239 Total : 2352.74 588.19 111454.89 55095.27 350612.13 00:24:40.239 00:24:40.239 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:40.496 No valid NVMe controllers or AIO or URING devices found 00:24:40.496 Initializing NVMe Controllers 00:24:40.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:40.496 Controller IO queue size 128, less than required. 00:24:40.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:40.496 Controller IO queue size 128, less than required. 00:24:40.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:40.496 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:40.496 WARNING: Some requested NVMe devices were skipped 00:24:40.496 09:00:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:43.775 Initializing NVMe Controllers 00:24:43.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.775 Controller IO queue size 128, less than required. 00:24:43.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.775 Controller IO queue size 128, less than required. 00:24:43.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:43.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.775 Initialization complete. Launching workers. 
00:24:43.775 00:24:43.775 ==================== 00:24:43.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:43.775 TCP transport: 00:24:43.775 polls: 9713 00:24:43.775 idle_polls: 6616 00:24:43.775 sock_completions: 3097 00:24:43.775 nvme_completions: 5463 00:24:43.775 submitted_requests: 8164 00:24:43.775 queued_requests: 1 00:24:43.775 00:24:43.775 ==================== 00:24:43.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:43.775 TCP transport: 00:24:43.775 polls: 10634 00:24:43.775 idle_polls: 7384 00:24:43.775 sock_completions: 3250 00:24:43.775 nvme_completions: 5757 00:24:43.775 submitted_requests: 8564 00:24:43.775 queued_requests: 1 00:24:43.775 ======================================================== 00:24:43.775 Latency(us) 00:24:43.775 Device Information : IOPS MiB/s Average min max 00:24:43.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1364.57 341.14 95606.86 62881.85 152183.31 00:24:43.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1438.02 359.50 90876.35 52490.85 137209.59 00:24:43.775 ======================================================== 00:24:43.775 Total : 2802.58 700.65 93179.62 52490.85 152183.31 00:24:43.775 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:43.775 rmmod nvme_tcp 00:24:43.775 rmmod nvme_fabrics 00:24:43.775 rmmod nvme_keyring 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 889077 ']' 00:24:43.775 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 889077 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 889077 ']' 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 889077 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 889077 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 889077' 00:24:43.776 killing process with pid 889077 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # 
kill 889077 00:24:43.776 09:00:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 889077 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.147 09:00:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:47.058 00:24:47.058 real 0m21.124s 00:24:47.058 user 1m5.103s 00:24:47.058 sys 0m5.537s 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:47.058 ************************************ 00:24:47.058 END TEST nvmf_perf 00:24:47.058 ************************************ 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.058 ************************************ 00:24:47.058 START TEST nvmf_fio_host 00:24:47.058 ************************************ 00:24:47.058 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:47.317 * Looking for test storage... 00:24:47.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lcov --version 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:47.317 09:01:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS=
00:24:47.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:47.317 --rc genhtml_branch_coverage=1
00:24:47.317 --rc genhtml_function_coverage=1
00:24:47.317 --rc genhtml_legend=1
00:24:47.317 --rc geninfo_all_blocks=1
00:24:47.317 --rc geninfo_unexecuted_blocks=1
00:24:47.317
00:24:47.317 '
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS='
00:24:47.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:47.317 --rc genhtml_branch_coverage=1
00:24:47.317 --rc genhtml_function_coverage=1
00:24:47.317 --rc genhtml_legend=1
00:24:47.317 --rc geninfo_all_blocks=1
00:24:47.317 --rc geninfo_unexecuted_blocks=1
00:24:47.317
00:24:47.317 '
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov
00:24:47.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:47.317 --rc genhtml_branch_coverage=1
00:24:47.317 --rc genhtml_function_coverage=1
00:24:47.317 --rc genhtml_legend=1
00:24:47.317 --rc geninfo_all_blocks=1
00:24:47.317 --rc geninfo_unexecuted_blocks=1
00:24:47.317
00:24:47.317 '
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1703 -- # LCOV='lcov
00:24:47.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:47.317 --rc genhtml_branch_coverage=1
00:24:47.317 --rc genhtml_function_coverage=1
00:24:47.317 --rc genhtml_legend=1
00:24:47.317 --rc geninfo_all_blocks=1
00:24:47.317 --rc geninfo_unexecuted_blocks=1
00:24:47.317
00:24:47.317 '
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:47.317 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:24:47.318 09:01:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:24:49.849 Found 0000:09:00.0 (0x8086 - 0x159b)
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:24:49.849 Found 0000:09:00.1 (0x8086 - 0x159b)
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:24:49.849 Found net devices under 0000:09:00.0: cvl_0_0
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:24:49.849 Found net devices under 0000:09:00.1: cvl_0_1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:49.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:49.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms
00:24:49.849
00:24:49.849 --- 10.0.0.2 ping statistics ---
00:24:49.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:49.849 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:24:49.849 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:49.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:49.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:24:49.849
00:24:49.849 --- 10.0.0.1 ping statistics ---
00:24:49.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:49.849 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=893039
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 893039
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 893039 ']'
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:49.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:49.850 09:01:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:49.850 [2024-11-06 09:01:02.818130] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:24:49.850 [2024-11-06 09:01:02.818230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:49.850 [2024-11-06 09:01:02.889032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:49.850 [2024-11-06 09:01:02.948501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:49.850 [2024-11-06 09:01:02.948546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:49.850 [2024-11-06 09:01:02.948573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:49.850 [2024-11-06 09:01:02.948584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:49.850 [2024-11-06 09:01:02.948593] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:49.850 [2024-11-06 09:01:02.950148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:49.850 [2024-11-06 09:01:02.950211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:49.850 [2024-11-06 09:01:02.950279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:24:49.850 [2024-11-06 09:01:02.950281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:49.850 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:49.850 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0
00:24:49.850 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:50.415 [2024-11-06 09:01:03.410694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:50.415 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:24:50.415 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:50.415 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:24:50.415 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:24:50.672 Malloc1
00:24:50.673 09:01:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:50.931 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:24:51.188 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:51.445 [2024-11-06 09:01:04.534028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:51.445 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib=
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:24:51.712 09:01:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:24:52.032 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:24:52.032 fio-3.35
00:24:52.032 Starting 1 thread
00:24:54.558
00:24:54.558 test: (groupid=0, jobs=1): err= 0: pid=893410: Wed Nov 6 09:01:07 2024
00:24:54.558 read: IOPS=8282, BW=32.4MiB/s (33.9MB/s)(64.9MiB/2007msec)
00:24:54.558 slat (nsec): min=1930, max=105399, avg=2495.17, stdev=1385.74
00:24:54.558 clat (usec): min=2523, max=13512, avg=8411.66, stdev=712.52
00:24:54.558 lat (usec): min=2547, max=13514, avg=8414.15, stdev=712.45
00:24:54.558 clat percentiles (usec):
00:24:54.558 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7832],
00:24:54.558 | 30.00th=[ 8029], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586],
00:24:54.558 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503],
00:24:54.558 | 99.00th=[10028], 99.50th=[10159], 99.90th=[11731], 99.95th=[12911],
00:24:54.558 | 99.99th=[13435]
00:24:54.558 bw ( KiB/s): min=32752, max=33784, per=99.92%, avg=33100.00, stdev=465.22, samples=4
00:24:54.558 iops : min= 8188, max= 8446, avg=8275.00, stdev=116.30, samples=4
00:24:54.558 write: IOPS=8279, BW=32.3MiB/s (33.9MB/s)(64.9MiB/2007msec); 0 zone resets
00:24:54.558 slat (nsec): min=2070, max=88079, avg=2598.74, stdev=1119.84
00:24:54.558 clat (usec): min=926, max=13222, avg=6982.30, stdev=577.10
00:24:54.558 lat (usec): min=932, max=13224, avg=6984.90, stdev=577.07
00:24:54.558 clat percentiles (usec):
00:24:54.558 | 1.00th=[ 5669], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6587],
00:24:54.558 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111],
| 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7832], 00:24:54.558 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[11600], 99.95th=[11994], 00:24:54.558 | 99.99th=[13173] 00:24:54.558 bw ( KiB/s): min=32632, max=33624, per=100.00%, avg=33124.00, stdev=405.85, samples=4 00:24:54.558 iops : min= 8158, max= 8406, avg=8281.00, stdev=101.46, samples=4 00:24:54.558 lat (usec) : 1000=0.01% 00:24:54.558 lat (msec) : 2=0.02%, 4=0.10%, 10=99.34%, 20=0.53% 00:24:54.558 cpu : usr=63.46%, sys=34.95%, ctx=73, majf=0, minf=36 00:24:54.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:54.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:54.558 issued rwts: total=16622,16617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.558 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:54.558 00:24:54.558 Run status group 0 (all jobs): 00:24:54.558 READ: bw=32.4MiB/s (33.9MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-33.9MB/s), io=64.9MiB (68.1MB), run=2007-2007msec 00:24:54.558 WRITE: bw=32.3MiB/s (33.9MB/s), 32.3MiB/s-32.3MiB/s (33.9MB/s-33.9MB/s), io=64.9MiB (68.1MB), run=2007-2007msec 00:24:54.558 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:54.559 09:01:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:54.559 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:54.559 fio-3.35 00:24:54.559 Starting 1 thread 00:24:57.090 00:24:57.090 test: (groupid=0, jobs=1): err= 0: pid=893741: Wed Nov 6 09:01:09 2024 00:24:57.090 read: IOPS=7696, BW=120MiB/s (126MB/s)(242MiB/2011msec) 00:24:57.090 slat (nsec): min=2810, max=94022, avg=3774.47, stdev=1939.59 00:24:57.090 clat (usec): min=1948, max=51510, avg=9529.92, stdev=4166.50 00:24:57.090 lat (usec): min=1952, max=51514, avg=9533.69, stdev=4166.48 00:24:57.090 clat percentiles (usec): 00:24:57.090 | 1.00th=[ 4817], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7242], 00:24:57.090 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:24:57.090 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12387], 95.00th=[13960], 00:24:57.090 | 99.00th=[16909], 99.50th=[46400], 99.90th=[50594], 99.95th=[51119], 00:24:57.090 | 99.99th=[51643] 00:24:57.090 bw ( KiB/s): min=59104, max=66592, per=51.77%, avg=63752.00, stdev=3415.80, samples=4 00:24:57.090 iops : min= 3694, max= 4162, avg=3984.50, stdev=213.49, samples=4 00:24:57.090 write: IOPS=4589, BW=71.7MiB/s (75.2MB/s)(131MiB/1820msec); 0 zone resets 00:24:57.090 slat (usec): min=30, max=194, avg=34.74, stdev= 6.73 00:24:57.090 clat (usec): min=6971, max=20656, avg=12305.59, stdev=2147.65 00:24:57.091 lat (usec): min=7008, max=20718, avg=12340.34, stdev=2147.82 00:24:57.091 clat percentiles (usec): 00:24:57.091 | 
1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10421], 00:24:57.091 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12649], 00:24:57.091 | 70.00th=[13304], 80.00th=[13960], 90.00th=[15270], 95.00th=[16319], 00:24:57.091 | 99.00th=[18220], 99.50th=[19006], 99.90th=[19530], 99.95th=[19530], 00:24:57.091 | 99.99th=[20579] 00:24:57.091 bw ( KiB/s): min=61216, max=69280, per=90.53%, avg=66472.00, stdev=3659.25, samples=4 00:24:57.091 iops : min= 3826, max= 4330, avg=4154.50, stdev=228.70, samples=4 00:24:57.091 lat (msec) : 2=0.01%, 4=0.12%, 10=48.64%, 20=50.70%, 50=0.43% 00:24:57.091 lat (msec) : 100=0.10% 00:24:57.091 cpu : usr=74.33%, sys=24.33%, ctx=54, majf=0, minf=58 00:24:57.091 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:57.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:57.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:57.091 issued rwts: total=15478,8352,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:57.091 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:57.091 00:24:57.091 Run status group 0 (all jobs): 00:24:57.091 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=242MiB (254MB), run=2011-2011msec 00:24:57.091 WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=131MiB (137MB), run=1820-1820msec 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 
00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.091 rmmod nvme_tcp 00:24:57.091 rmmod nvme_fabrics 00:24:57.091 rmmod nvme_keyring 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 893039 ']' 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 893039 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 893039 ']' 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 893039 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893039 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.091 09:01:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893039' 00:24:57.091 killing process with pid 893039 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 893039 00:24:57.091 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 893039 00:24:57.348 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.349 09:01:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.892 00:24:59.892 real 0m12.341s 00:24:59.892 user 0m36.431s 00:24:59.892 sys 0m4.123s 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:59.892 09:01:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.892 ************************************ 00:24:59.892 END TEST nvmf_fio_host 00:24:59.892 ************************************ 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.892 ************************************ 00:24:59.892 START TEST nvmf_failover 00:24:59.892 ************************************ 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:59.892 * Looking for test storage... 
00:24:59.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lcov --version 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.892 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:24:59.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.893 --rc genhtml_branch_coverage=1 00:24:59.893 --rc genhtml_function_coverage=1 00:24:59.893 --rc genhtml_legend=1 00:24:59.893 --rc geninfo_all_blocks=1 00:24:59.893 --rc geninfo_unexecuted_blocks=1 00:24:59.893 00:24:59.893 ' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1702 -- 
# LCOV_OPTS=' 00:24:59.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.893 --rc genhtml_branch_coverage=1 00:24:59.893 --rc genhtml_function_coverage=1 00:24:59.893 --rc genhtml_legend=1 00:24:59.893 --rc geninfo_all_blocks=1 00:24:59.893 --rc geninfo_unexecuted_blocks=1 00:24:59.893 00:24:59.893 ' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:24:59.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.893 --rc genhtml_branch_coverage=1 00:24:59.893 --rc genhtml_function_coverage=1 00:24:59.893 --rc genhtml_legend=1 00:24:59.893 --rc geninfo_all_blocks=1 00:24:59.893 --rc geninfo_unexecuted_blocks=1 00:24:59.893 00:24:59.893 ' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:24:59.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.893 --rc genhtml_branch_coverage=1 00:24:59.893 --rc genhtml_function_coverage=1 00:24:59.893 --rc genhtml_legend=1 00:24:59.893 --rc geninfo_all_blocks=1 00:24:59.893 --rc geninfo_unexecuted_blocks=1 00:24:59.893 00:24:59.893 ' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:59.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:59.893 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:59.894 09:01:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.795 09:01:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:01.795 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.795 09:01:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.795 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:01.796 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:01.796 09:01:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:01.796 Found net devices under 0000:09:00.0: cvl_0_0 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:01.796 Found net devices under 0000:09:00.1: cvl_0_1 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:01.796 09:01:14 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.796 09:01:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:25:01.796 00:25:01.796 --- 10.0.0.2 ping statistics --- 00:25:01.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.796 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:01.796 00:25:01.796 --- 10.0.0.1 ping statistics --- 00:25:01.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.796 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:01.796 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=896063 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 896063 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 896063 ']' 00:25:02.055 09:01:15 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.055 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.055 [2024-11-06 09:01:15.152142] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:25:02.055 [2024-11-06 09:01:15.152235] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.055 [2024-11-06 09:01:15.223759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.055 [2024-11-06 09:01:15.276542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.055 [2024-11-06 09:01:15.276595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.055 [2024-11-06 09:01:15.276623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.055 [2024-11-06 09:01:15.276633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:02.055 [2024-11-06 09:01:15.276642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.055 [2024-11-06 09:01:15.278096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.055 [2024-11-06 09:01:15.278153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.055 [2024-11-06 09:01:15.278171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.313 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.570 [2024-11-06 09:01:15.665159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.570 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:02.828 Malloc0 00:25:02.828 09:01:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.085 09:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.343 09:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.601 [2024-11-06 09:01:16.773507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.601 09:01:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.858 [2024-11-06 09:01:17.038309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.858 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:04.116 [2024-11-06 09:01:17.327271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=896354 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 896354 /var/tmp/bdevperf.sock 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- 
# '[' -z 896354 ']' 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:04.116 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:04.374 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:04.374 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:04.374 09:01:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:04.938 NVMe0n1 00:25:04.938 09:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:05.196 00:25:05.196 09:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=896445 00:25:05.196 09:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:05.196 09:01:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
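The trace above drives the SPDK target entirely through rpc.py: it creates a TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with listeners on ports 4420/4421/4422, and then has bdevperf attach the controller with `-x failover`. The same sequence can be replayed as a dry-run sketch (echo-only, so it runs anywhere; point `RPC` at the real `scripts/rpc.py` inside the target namespace to actually execute it — paths and socket names are taken from the log above):

```shell
# Dry-run sketch of the RPC calls visible in the trace above.
# "echo" keeps this runnable without SPDK; swap it for the real rpc.py to execute.
RPC="echo rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done

# bdevperf then talks to its own RPC socket and registers the failover path:
BPERF="echo rpc.py -s /var/tmp/bdevperf.sock"
$BPERF bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
```

The test then removes and re-adds listeners (4420, 4421, 4422) while bdevperf runs, which is what produces the tqpair state-change messages further down.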
00:25:06.129 09:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.387 [2024-11-06 09:01:19.651090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207dc70 is same with the state(6) to be set 00:25:06.387 09:01:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:09.665 09:01:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:09.922 00:25:09.922 09:01:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.180 09:01:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:13.457 09:01:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.457 [2024-11-06 09:01:26.586038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.457 09:01:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:14.389 09:01:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:14.646 [2024-11-06 09:01:27.866194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 
09:01:27.866257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 [2024-11-06 09:01:27.866333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207fb60 is same with the state(6) to be set 00:25:14.646 09:01:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 896445 00:25:21.210 { 00:25:21.210 "results": [ 00:25:21.210 { 00:25:21.210 "job": "NVMe0n1", 00:25:21.210 "core_mask": "0x1", 00:25:21.210 "workload": "verify", 00:25:21.210 "status": "finished", 00:25:21.210 "verify_range": { 00:25:21.210 "start": 0, 00:25:21.210 "length": 16384 00:25:21.210 }, 00:25:21.210 "queue_depth": 128, 00:25:21.210 "io_size": 4096, 00:25:21.210 "runtime": 15.015363, 00:25:21.210 "iops": 8489.971238124579, 00:25:21.210 "mibps": 33.16395014892414, 00:25:21.210 "io_failed": 5245, 00:25:21.210 "io_timeout": 0, 00:25:21.210 "avg_latency_us": 14452.22809119943, 00:25:21.210 "min_latency_us": 825.2681481481482, 00:25:21.210 "max_latency_us": 16990.814814814814 00:25:21.210 } 00:25:21.210 ], 00:25:21.210 "core_count": 1 00:25:21.210 } 00:25:21.210 09:01:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 896354 ']' 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896354' 00:25:21.210 killing process with pid 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 896354 00:25:21.210 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:21.210 [2024-11-06 09:01:17.390810] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
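The bdevperf JSON summary printed above reports both `iops` and `mibps` for the 4096-byte verify workload; the two are related by `mibps = iops * io_size / 2^20`. A small standalone check of that arithmetic (the figures are copied verbatim from the results block above; this script is illustrative and not part of the test suite):

```python
import json

# bdevperf result fields copied from the summary above (timestamps stripped).
summary = json.loads("""
{
  "job": "NVMe0n1",
  "io_size": 4096,
  "runtime": 15.015363,
  "iops": 8489.971238124579,
  "mibps": 33.16395014892414,
  "io_failed": 5245
}
""")

# Throughput in MiB/s is IOPS times the I/O size, scaled to mebibytes.
derived_mibps = summary["iops"] * summary["io_size"] / 2**20
print(f"derived {derived_mibps:.6f} MiB/s vs reported {summary['mibps']:.6f} MiB/s")

# Rough total of completed I/Os over the 15 s run; io_failed counts the I/Os
# aborted while listeners were being removed during failover.
total_ios = summary["iops"] * summary["runtime"]
print(f"~{total_ios:,.0f} I/Os completed, {summary['io_failed']} failed")
```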
00:25:21.211 [2024-11-06 09:01:17.390919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid896354 ] 00:25:21.211 [2024-11-06 09:01:17.459086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.211 [2024-11-06 09:01:17.518292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.211 Running I/O for 15 seconds... 00:25:21.211 8527.00 IOPS, 33.31 MiB/s [2024-11-06T08:01:34.500Z] [2024-11-06 09:01:19.651652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.651695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:21.211 [2024-11-06 09:01:19.651820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.651977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.651992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80280 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 
09:01:19.652387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.211 [2024-11-06 09:01:19.652462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.652492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.652522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.652550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.652579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.211 [2024-11-06 09:01:19.652608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.211 [2024-11-06 09:01:19.652623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652939] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.652983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.652998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:21.212 [2024-11-06 09:01:19.653297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.212 [2024-11-06 09:01:19.653671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.653700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.653729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.653757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 
[2024-11-06 09:01:19.653785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.653828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.212 [2024-11-06 09:01:19.653853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.212 [2024-11-06 09:01:19.653867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.653888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.653902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.653917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.653930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.653946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.653960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.653976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.653990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 
[2024-11-06 09:01:19.654333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 [2024-11-06 09:01:19.654803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.213 
[2024-11-06 09:01:19.654856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.213 [2024-11-06 09:01:19.654882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.654898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.654913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.654933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.654950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.654981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.654995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655042] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.214 [2024-11-06 09:01:19.655211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.214 [2024-11-06 09:01:19.655225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.214 [2024-11-06 09:01:19.655260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION" pair repeats for lba:80064 through lba:80160 (cids 8, 76, 83, 25, 7, 57, 30, 109, 14, 3, 24, 81, 120) ...]
00:25:21.214 [2024-11-06 09:01:19.655668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.214 [2024-11-06 09:01:19.655681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:21.214 [2024-11-06 09:01:19.655727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:21.214 [2024-11-06 09:01:19.655740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80544 len:8 PRP1 0x0 PRP2 0x0
00:25:21.214 [2024-11-06 09:01:19.655753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655821]
bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:21.214 [2024-11-06 09:01:19.655864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.214 [2024-11-06 09:01:19.655907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.214 [2024-11-06 09:01:19.655937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.214 [2024-11-06 09:01:19.655965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.655979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.214 [2024-11-06 09:01:19.655992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:19.656014] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:21.214 [2024-11-06 09:01:19.659366] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:21.214 [2024-11-06 09:01:19.659405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f7560 (9): Bad file descriptor
00:25:21.214 [2024-11-06 09:01:19.730939] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:25:21.214 8183.00 IOPS, 31.96 MiB/s [2024-11-06T08:01:34.503Z] 8328.00 IOPS, 32.53 MiB/s [2024-11-06T08:01:34.503Z] 8431.75 IOPS, 32.94 MiB/s [2024-11-06T08:01:34.503Z]
[2024-11-06 09:01:23.318706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.214 [2024-11-06 09:01:23.318772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:23.318804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.214 [2024-11-06 09:01:23.318821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.214 [2024-11-06 09:01:23.318915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.214 [2024-11-06 09:01:23.318933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.215 [2024-11-06 09:01:23.318966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.215 [2024-11-06 09:01:23.318981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.215 [2024-11-06 09:01:23.318996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.215 [2024-11-06 09:01:23.319010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION" pair repeats for lba:85760 through lba:86096 ...]
00:25:21.216 [2024-11-06 09:01:23.320399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.216 [2024-11-06 09:01:23.320413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION" pair repeats for lba:85344 through lba:85384 ...]
00:25:21.216 [2024-11-06 09:01:23.320608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.216 [2024-11-06 09:01:23.320622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / "ABORTED - SQ DELETION" pair repeats for lba:86112 through lba:86352 ...]
00:25:21.217 [2024-11-06 09:01:23.321655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.217 [2024-11-06 09:01:23.321669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION" pair repeats for lba:85400 through lba:85440 ...]
00:25:21.217 [2024-11-06 09:01:23.321912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.217 [2024-11-06 09:01:23.321927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.321943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.321957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.321973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.321987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 
[2024-11-06 09:01:23.322119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.217 [2024-11-06 09:01:23.322213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.217 [2024-11-06 09:01:23.322228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322304] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.218 [2024-11-06 09:01:23.322648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:23.322955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.322970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x819ec0 is same with the state(6) to be set 00:25:21.218 [2024-11-06 09:01:23.322987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.218 [2024-11-06 09:01:23.322998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.218 [2024-11-06 09:01:23.323018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85712 len:8 PRP1 0x0 PRP2 0x0 00:25:21.218 [2024-11-06 09:01:23.323032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.323096] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:21.218 [2024-11-06 09:01:23.323135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.218 [2024-11-06 09:01:23.323156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.323172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.218 [2024-11-06 09:01:23.323186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.323200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.218 [2024-11-06 09:01:23.323213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.323227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.218 [2024-11-06 09:01:23.323241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:23.323255] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:25:21.218 [2024-11-06 09:01:23.323313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f7560 (9): Bad file descriptor 00:25:21.218 [2024-11-06 09:01:23.326605] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:21.218 [2024-11-06 09:01:23.360045] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:25:21.218 8378.00 IOPS, 32.73 MiB/s [2024-11-06T08:01:34.507Z] 8443.67 IOPS, 32.98 MiB/s [2024-11-06T08:01:34.507Z] 8496.86 IOPS, 33.19 MiB/s [2024-11-06T08:01:34.507Z] 8531.25 IOPS, 33.33 MiB/s [2024-11-06T08:01:34.507Z] 8556.44 IOPS, 33.42 MiB/s [2024-11-06T08:01:34.507Z] [2024-11-06 09:01:27.867232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.218 [2024-11-06 09:01:27.867275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 09:01:27.867318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 09:01:27.867349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 
09:01:27.867393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 09:01:27.867423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 09:01:27.867464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.218 [2024-11-06 09:01:27.867479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.218 [2024-11-06 09:01:27.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:92 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:21.219 [2024-11-06 09:01:27.867745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.867975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.867991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 
09:01:27.868317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.219 [2024-11-06 09:01:27.868467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.219 [2024-11-06 09:01:27.868481] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.219 [2024-11-06 09:01:27.868496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.219 [2024-11-06 09:01:27.868511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeated for WRITE sqid:1 nsid:1 lba:17856-18240 (len:8, step 8, varying cids, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:17480-17544 (len:8, step 8, varying cids, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:21.221 [2024-11-06 09:01:27.870419] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:21.221 [2024-11-06 09:01:27.870436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18248 len:8 PRP1 0x0 PRP2 0x0
00:25:21.221 [2024-11-06 09:01:27.870450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.221 [2024-11-06 09:01:27.870467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... "aborting queued i/o" / manual-completion sequence repeated for WRITE sqid:1 cid:0 nsid:1 lba:18256-18488 (len:8, step 8, PRP1 0x0 PRP2 0x0), each completed with ABORTED - SQ DELETION (00/08) ...]
00:25:21.222 [2024-11-06 09:01:27.872117] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:21.222 [2024-11-06 09:01:27.872170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.222 [2024-11-06 09:01:27.872189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.222 [2024-11-06 09:01:27.872220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.222 [2024-11-06 09:01:27.872234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.222 [2024-11-06 09:01:27.872249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.222 [2024-11-06 09:01:27.872262] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.222 [2024-11-06 09:01:27.872277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.222 [2024-11-06 09:01:27.872290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.222 [2024-11-06 09:01:27.872304] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:21.222 [2024-11-06 09:01:27.872361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f7560 (9): Bad file descriptor 00:25:21.222 [2024-11-06 09:01:27.875652] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:21.222 [2024-11-06 09:01:27.911179] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:25:21.222 8528.40 IOPS, 33.31 MiB/s [2024-11-06T08:01:34.511Z] 8520.91 IOPS, 33.28 MiB/s [2024-11-06T08:01:34.511Z] 8509.58 IOPS, 33.24 MiB/s [2024-11-06T08:01:34.511Z] 8501.23 IOPS, 33.21 MiB/s [2024-11-06T08:01:34.511Z] 8499.71 IOPS, 33.20 MiB/s [2024-11-06T08:01:34.511Z] 8490.13 IOPS, 33.16 MiB/s 00:25:21.222 Latency(us) 00:25:21.222 [2024-11-06T08:01:34.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.222 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:21.223 Verification LBA range: start 0x0 length 0x4000 00:25:21.223 NVMe0n1 : 15.02 8489.97 33.16 349.31 0.00 14452.23 825.27 16990.81 00:25:21.223 [2024-11-06T08:01:34.512Z] =================================================================================================================== 00:25:21.223 [2024-11-06T08:01:34.512Z] Total : 8489.97 33.16 349.31 0.00 14452.23 825.27 16990.81 00:25:21.223 Received shutdown signal, test time was about 15.000000 seconds 00:25:21.223 00:25:21.223 Latency(us) 00:25:21.223 [2024-11-06T08:01:34.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.223 [2024-11-06T08:01:34.512Z] =================================================================================================================== 00:25:21.223 [2024-11-06T08:01:34.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=898212 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 898212 /var/tmp/bdevperf.sock 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 898212 ']' 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:21.223 09:01:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.223 09:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.223 09:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:21.223 09:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.223 [2024-11-06 09:01:34.352452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.223 09:01:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:21.480 [2024-11-06 09:01:34.677407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:21.480 09:01:34 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:22.046 NVMe0n1 00:25:22.046 09:01:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:22.610 00:25:22.610 09:01:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:22.867 00:25:22.867 09:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.867 09:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:23.125 09:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.382 09:01:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:26.658 09:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.658 09:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:26.658 09:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=899001 00:25:26.658 09:01:39 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.658 09:01:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 899001 00:25:28.116 { 00:25:28.116 "results": [ 00:25:28.116 { 00:25:28.116 "job": "NVMe0n1", 00:25:28.116 "core_mask": "0x1", 00:25:28.116 "workload": "verify", 00:25:28.116 "status": "finished", 00:25:28.116 "verify_range": { 00:25:28.116 "start": 0, 00:25:28.116 "length": 16384 00:25:28.116 }, 00:25:28.116 "queue_depth": 128, 00:25:28.116 "io_size": 4096, 00:25:28.116 "runtime": 1.009512, 00:25:28.116 "iops": 8592.270324671723, 00:25:28.116 "mibps": 33.56355595574892, 00:25:28.116 "io_failed": 0, 00:25:28.116 "io_timeout": 0, 00:25:28.116 "avg_latency_us": 14813.883806351889, 00:25:28.116 "min_latency_us": 2985.528888888889, 00:25:28.116 "max_latency_us": 13010.10962962963 00:25:28.116 } 00:25:28.116 ], 00:25:28.116 "core_count": 1 00:25:28.116 } 00:25:28.116 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:28.116 [2024-11-06 09:01:33.851451] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:25:28.116 [2024-11-06 09:01:33.851542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid898212 ] 00:25:28.116 [2024-11-06 09:01:33.918760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.116 [2024-11-06 09:01:33.975431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.116 [2024-11-06 09:01:36.582148] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:28.116 [2024-11-06 09:01:36.582233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.116 [2024-11-06 09:01:36.582255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.116 [2024-11-06 09:01:36.582272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.116 [2024-11-06 09:01:36.582301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.116 [2024-11-06 09:01:36.582316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.116 [2024-11-06 09:01:36.582331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.116 [2024-11-06 09:01:36.582345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:28.116 [2024-11-06 09:01:36.582358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.116 [2024-11-06 09:01:36.582372] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:28.116 [2024-11-06 09:01:36.582421] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:28.116 [2024-11-06 09:01:36.582452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0560 (9): Bad file descriptor 00:25:28.116 [2024-11-06 09:01:36.588809] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:28.116 Running I/O for 1 seconds... 00:25:28.116 8546.00 IOPS, 33.38 MiB/s 00:25:28.116 Latency(us) 00:25:28.116 [2024-11-06T08:01:41.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.116 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:28.116 Verification LBA range: start 0x0 length 0x4000 00:25:28.116 NVMe0n1 : 1.01 8592.27 33.56 0.00 0.00 14813.88 2985.53 13010.11 00:25:28.116 [2024-11-06T08:01:41.405Z] =================================================================================================================== 00:25:28.116 [2024-11-06T08:01:41.405Z] Total : 8592.27 33.56 0.00 0.00 14813.88 2985.53 13010.11 00:25:28.116 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:28.116 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:28.116 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.374 09:01:41 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:28.374 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:28.938 09:01:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:28.938 09:01:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 898212 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 898212 ']' 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 898212 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:32.216 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 898212 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 898212' 00:25:32.474 killing process 
with pid 898212 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 898212 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 898212 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:32.474 09:01:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.732 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.732 rmmod nvme_tcp 00:25:32.990 rmmod nvme_fabrics 00:25:32.990 rmmod nvme_keyring 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 896063 ']' 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@516 -- # killprocess 896063 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 896063 ']' 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 896063 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 896063 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 896063' 00:25:32.990 killing process with pid 896063 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 896063 00:25:32.990 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 896063 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:33.248 09:01:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:35.151 00:25:35.151 real 0m35.709s 00:25:35.151 user 2m6.587s 00:25:35.151 sys 0m5.738s 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:35.151 ************************************ 00:25:35.151 END TEST nvmf_failover 00:25:35.151 ************************************ 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:35.151 09:01:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.410 ************************************ 00:25:35.410 START TEST nvmf_host_discovery 00:25:35.410 ************************************ 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:35.410 * Looking for test storage... 
00:25:35.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lcov --version 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:25:35.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.410 --rc genhtml_branch_coverage=1 00:25:35.410 --rc genhtml_function_coverage=1 00:25:35.410 --rc 
genhtml_legend=1 00:25:35.410 --rc geninfo_all_blocks=1 00:25:35.410 --rc geninfo_unexecuted_blocks=1 00:25:35.410 00:25:35.410 ' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:25:35.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.410 --rc genhtml_branch_coverage=1 00:25:35.410 --rc genhtml_function_coverage=1 00:25:35.410 --rc genhtml_legend=1 00:25:35.410 --rc geninfo_all_blocks=1 00:25:35.410 --rc geninfo_unexecuted_blocks=1 00:25:35.410 00:25:35.410 ' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:25:35.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.410 --rc genhtml_branch_coverage=1 00:25:35.410 --rc genhtml_function_coverage=1 00:25:35.410 --rc genhtml_legend=1 00:25:35.410 --rc geninfo_all_blocks=1 00:25:35.410 --rc geninfo_unexecuted_blocks=1 00:25:35.410 00:25:35.410 ' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:25:35.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.410 --rc genhtml_branch_coverage=1 00:25:35.410 --rc genhtml_function_coverage=1 00:25:35.410 --rc genhtml_legend=1 00:25:35.410 --rc geninfo_all_blocks=1 00:25:35.410 --rc geninfo_unexecuted_blocks=1 00:25:35.410 00:25:35.410 ' 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.410 09:01:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.410 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.411 09:01:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.411 09:01:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 
00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.411 09:01:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.941 
09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.941 09:01:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:37.941 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:37.941 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:37.941 Found net devices under 0000:09:00.0: cvl_0_0 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:37.941 Found net devices under 0000:09:00.1: cvl_0_1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:25:37.941 00:25:37.941 --- 10.0.0.2 ping statistics --- 00:25:37.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.941 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:25:37.941 00:25:37.941 --- 10.0.0.1 ping statistics --- 00:25:37.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.941 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.941 
09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=901616 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 901616 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 901616 ']' 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.941 09:01:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.941 [2024-11-06 09:01:50.902239] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:25:37.941 [2024-11-06 09:01:50.902318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.941 [2024-11-06 09:01:50.976350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.941 [2024-11-06 09:01:51.032449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.941 [2024-11-06 09:01:51.032499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.941 [2024-11-06 09:01:51.032527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.941 [2024-11-06 09:01:51.032539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.941 [2024-11-06 09:01:51.032548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:37.942 [2024-11-06 09:01:51.033178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 [2024-11-06 09:01:51.166153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 [2024-11-06 09:01:51.174378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:37.942 09:01:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 null0 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 null1 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=901649 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 901649 /tmp/host.sock 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@831 -- # '[' -z 901649 ']' 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:37.942 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.942 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.199 [2024-11-06 09:01:51.249668] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:25:38.200 [2024-11-06 09:01:51.249733] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid901649 ] 00:25:38.200 [2024-11-06 09:01:51.315849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.200 [2024-11-06 09:01:51.372201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.200 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.200 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:38.200 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.200 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:38.200 09:01:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.200 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:38.458 09:01:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:38.458 09:01:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:38.458 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.459 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.717 [2024-11-06 09:01:51.767930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 
00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:38.717 09:01:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:39.282 [2024-11-06 09:01:52.543970] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:39.282 [2024-11-06 09:01:52.543993] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:39.282 [2024-11-06 09:01:52.544015] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:39.540 [2024-11-06 09:01:52.630335] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:39.540 [2024-11-06 09:01:52.692073] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:39.540 [2024-11-06 09:01:52.692944] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5c9f50:1 started. 
00:25:39.540 [2024-11-06 09:01:52.694632] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:39.540 [2024-11-06 09:01:52.694651] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:39.540 [2024-11-06 09:01:52.702111] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5c9f50 was disconnected and freed. delete nvme_qpair. 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:39.799 09:01:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:39.799 09:01:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:39.799 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.057 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.057 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:40.057 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:40.057 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.058 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.316 [2024-11-06 09:01:53.351686] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x5ca2d0:1 started. 
00:25:40.316 [2024-11-06 09:01:53.354731] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x5ca2d0 was disconnected and freed. delete nvme_qpair. 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.316 [2024-11-06 09:01:53.420779] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.316 [2024-11-06 09:01:53.421332] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:40.316 [2024-11-06 09:01:53.421360] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.316 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:40.317 09:01:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.317 [2024-11-06 09:01:53.507007] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:40.317 09:01:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:40.575 [2024-11-06 09:01:53.608904] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:40.575 [2024-11-06 09:01:53.608966] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:40.575 [2024-11-06 09:01:53.608983] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:25:40.575 [2024-11-06 09:01:53.608992] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:41.510 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.511 [2024-11-06 09:01:54.641003] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:41.511 [2024-11-06 09:01:54.641043] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 
max=10 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:41.511 [2024-11-06 09:01:54.646424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.511 [2024-11-06 09:01:54.646463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.511 [2024-11-06 09:01:54.646503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.511 [2024-11-06 09:01:54.646516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.511 [2024-11-06 09:01:54.646529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.511 [2024-11-06 09:01:54.646557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.511 [2024-11-06 09:01:54.646572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:41.511 [2024-11-06 09:01:54.646590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:41.511 [2024-11-06 09:01:54.646603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.511 09:01:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:41.511 [2024-11-06 09:01:54.656410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.511 [2024-11-06 09:01:54.666450] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.511 [2024-11-06 09:01:54.666472] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.511 [2024-11-06 09:01:54.666482] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.511 [2024-11-06 09:01:54.666490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.511 [2024-11-06 09:01:54.666536] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.511 [2024-11-06 09:01:54.666800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.511 [2024-11-06 09:01:54.666829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.511 [2024-11-06 09:01:54.666858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.511 [2024-11-06 09:01:54.666892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.511 [2024-11-06 09:01:54.666914] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.511 [2024-11-06 09:01:54.666928] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.511 [2024-11-06 09:01:54.666945] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.511 [2024-11-06 09:01:54.666958] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.511 [2024-11-06 09:01:54.666973] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.511 [2024-11-06 09:01:54.666990] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.511 [2024-11-06 09:01:54.676569] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.511 [2024-11-06 09:01:54.676589] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:41.511 [2024-11-06 09:01:54.676597] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.511 [2024-11-06 09:01:54.676604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.511 [2024-11-06 09:01:54.676642] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.511 [2024-11-06 09:01:54.676767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.511 [2024-11-06 09:01:54.676795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.511 [2024-11-06 09:01:54.676812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.511 [2024-11-06 09:01:54.676854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.511 [2024-11-06 09:01:54.676879] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.511 [2024-11-06 09:01:54.676893] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.511 [2024-11-06 09:01:54.676907] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.511 [2024-11-06 09:01:54.676920] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.511 [2024-11-06 09:01:54.676929] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.511 [2024-11-06 09:01:54.676943] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:41.511 [2024-11-06 09:01:54.686676] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.511 [2024-11-06 09:01:54.686698] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.511 [2024-11-06 09:01:54.686706] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.511 [2024-11-06 09:01:54.686713] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.511 [2024-11-06 09:01:54.686754] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:41.511 [2024-11-06 09:01:54.686952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.511 [2024-11-06 09:01:54.686986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.511 [2024-11-06 09:01:54.687004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.511 [2024-11-06 09:01:54.687026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.511 [2024-11-06 09:01:54.687046] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.511 [2024-11-06 09:01:54.687060] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.511 [2024-11-06 09:01:54.687073] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.511 [2024-11-06 09:01:54.687085] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.511 [2024-11-06 09:01:54.687094] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.511 [2024-11-06 09:01:54.687109] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:41.511 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:41.512 [2024-11-06 09:01:54.696788] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.512 [2024-11-06 09:01:54.696826] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.512 [2024-11-06 09:01:54.696844] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.696853] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.512 [2024-11-06 09:01:54.696880] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.512 [2024-11-06 09:01:54.697002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.512 [2024-11-06 09:01:54.697030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.512 [2024-11-06 09:01:54.697046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.512 [2024-11-06 09:01:54.697068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.512 [2024-11-06 09:01:54.697088] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.512 [2024-11-06 09:01:54.697102] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.512 [2024-11-06 09:01:54.697116] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.512 [2024-11-06 09:01:54.697136] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.512 [2024-11-06 09:01:54.697145] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.512 [2024-11-06 09:01:54.697160] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.512 [2024-11-06 09:01:54.706914] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.512 [2024-11-06 09:01:54.706941] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:41.512 [2024-11-06 09:01:54.706951] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.706958] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.512 [2024-11-06 09:01:54.706983] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.707117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.512 [2024-11-06 09:01:54.707145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.512 [2024-11-06 09:01:54.707161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.512 [2024-11-06 09:01:54.707183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.512 [2024-11-06 09:01:54.707203] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.512 [2024-11-06 09:01:54.707217] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.512 [2024-11-06 09:01:54.707230] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.512 [2024-11-06 09:01:54.707242] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.512 [2024-11-06 09:01:54.707251] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.512 [2024-11-06 09:01:54.707266] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.512 [2024-11-06 09:01:54.717019] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.512 [2024-11-06 09:01:54.717041] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.512 [2024-11-06 09:01:54.717050] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.717058] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.512 [2024-11-06 09:01:54.717083] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.717248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.512 [2024-11-06 09:01:54.717289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.512 [2024-11-06 09:01:54.717305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.512 [2024-11-06 09:01:54.717326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.512 [2024-11-06 09:01:54.717346] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.512 [2024-11-06 09:01:54.717359] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.512 [2024-11-06 09:01:54.717372] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:41.512 [2024-11-06 09:01:54.717384] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.512 [2024-11-06 09:01:54.717393] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.512 [2024-11-06 09:01:54.717416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:41.512 [2024-11-06 09:01:54.727131] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:41.512 [2024-11-06 09:01:54.727153] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.512 [2024-11-06 09:01:54.727163] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:25:41.512 [2024-11-06 09:01:54.727185] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.512 [2024-11-06 09:01:54.727210] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.727400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.512 [2024-11-06 09:01:54.727428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.512 [2024-11-06 09:01:54.727444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.512 [2024-11-06 09:01:54.727466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.512 [2024-11-06 09:01:54.727486] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.512 [2024-11-06 09:01:54.727500] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.512 [2024-11-06 09:01:54.727514] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.512 [2024-11-06 09:01:54.727526] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.512 [2024-11-06 09:01:54.727535] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.512 [2024-11-06 09:01:54.727550] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:41.512 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:41.512 [2024-11-06 09:01:54.737246] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.512 [2024-11-06 09:01:54.737269] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.512 [2024-11-06 09:01:54.737283] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.512 [2024-11-06 09:01:54.737291] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.512 [2024-11-06 09:01:54.737316] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.512 [2024-11-06 09:01:54.737530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.512 [2024-11-06 09:01:54.737558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.512 [2024-11-06 09:01:54.737575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.512 [2024-11-06 09:01:54.737598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.512 [2024-11-06 09:01:54.737618] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.512 [2024-11-06 09:01:54.737632] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.512 [2024-11-06 09:01:54.737645] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.512 [2024-11-06 09:01:54.737657] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.512 [2024-11-06 09:01:54.737666] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.512 [2024-11-06 09:01:54.737681] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.513 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.513 [2024-11-06 09:01:54.747350] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.513 [2024-11-06 09:01:54.747369] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:41.513 [2024-11-06 09:01:54.747377] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.513 [2024-11-06 09:01:54.747384] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.513 [2024-11-06 09:01:54.747421] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.513 [2024-11-06 09:01:54.747586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.513 [2024-11-06 09:01:54.747613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.513 [2024-11-06 09:01:54.747630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.513 [2024-11-06 09:01:54.747652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.513 [2024-11-06 09:01:54.747672] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.513 [2024-11-06 09:01:54.747686] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.513 [2024-11-06 09:01:54.747699] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.513 [2024-11-06 09:01:54.747711] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.513 [2024-11-06 09:01:54.747720] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.513 [2024-11-06 09:01:54.747734] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:41.513 [2024-11-06 09:01:54.757455] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.513 [2024-11-06 09:01:54.757475] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.513 [2024-11-06 09:01:54.757483] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.513 [2024-11-06 09:01:54.757490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.513 [2024-11-06 09:01:54.757527] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:41.513 [2024-11-06 09:01:54.757701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.513 [2024-11-06 09:01:54.757742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.513 [2024-11-06 09:01:54.757759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.513 [2024-11-06 09:01:54.757781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.513 [2024-11-06 09:01:54.757812] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.513 [2024-11-06 09:01:54.757844] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.513 [2024-11-06 09:01:54.757861] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.513 [2024-11-06 09:01:54.757873] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:25:41.513 [2024-11-06 09:01:54.757882] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:41.513 [2024-11-06 09:01:54.757897] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:41.513 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:25:41.513 [2024-11-06 09:01:54.767561] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:41.513 [2024-11-06 09:01:54.767581] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:41.513 [2024-11-06 09:01:54.767589] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:41.513 [2024-11-06 09:01:54.767596] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:41.513 [2024-11-06 09:01:54.767632] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:41.513 09:01:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:41.513 [2024-11-06 09:01:54.767829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.513 [2024-11-06 09:01:54.767864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x59a550 with addr=10.0.0.2, port=4420 00:25:41.513 [2024-11-06 09:01:54.767880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59a550 is same with the state(6) to be set 00:25:41.513 [2024-11-06 09:01:54.767902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x59a550 (9): Bad file descriptor 00:25:41.513 [2024-11-06 09:01:54.767958] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:41.513 [2024-11-06 09:01:54.767987] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:41.513 [2024-11-06 09:01:54.768023] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:41.513 [2024-11-06 09:01:54.768062] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:41.513 [2024-11-06 09:01:54.768077] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:41.513 [2024-11-06 09:01:54.768090] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:41.513 [2024-11-06 09:01:54.768099] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:41.513 [2024-11-06 09:01:54.768135] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == 
expected_count))' 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.886 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@918 -- # return 0 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # 
expected_count=2 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.887 09:01:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.887 09:01:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.820 [2024-11-06 09:01:57.056435] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:43.820 [2024-11-06 09:01:57.056467] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:43.820 [2024-11-06 09:01:57.056489] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.077 [2024-11-06 09:01:57.142747] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:44.335 [2024-11-06 09:01:57.451302] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 
00:25:44.335 [2024-11-06 09:01:57.452085] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x5c8de0:1 started. 00:25:44.335 [2024-11-06 09:01:57.454187] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.335 [2024-11-06 09:01:57.454229] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.335 [2024-11-06 09:01:57.455773] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x5c8de0 was disconnected and freed. delete nvme_qpair. 
00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.335 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.335 request: 00:25:44.335 { 00:25:44.335 "name": "nvme", 00:25:44.335 "trtype": "tcp", 00:25:44.335 "traddr": "10.0.0.2", 00:25:44.335 "adrfam": "ipv4", 00:25:44.335 "trsvcid": "8009", 00:25:44.335 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.335 "wait_for_attach": true, 00:25:44.335 "method": "bdev_nvme_start_discovery", 00:25:44.335 "req_id": 1 00:25:44.335 } 00:25:44.335 Got JSON-RPC error response 00:25:44.335 response: 00:25:44.335 { 00:25:44.335 "code": -17, 00:25:44.336 "message": "File exists" 00:25:44.336 } 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 
-- # (( !es == 0 )) 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == 
\n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.336 request: 00:25:44.336 { 00:25:44.336 "name": "nvme_second", 00:25:44.336 "trtype": "tcp", 00:25:44.336 "traddr": "10.0.0.2", 00:25:44.336 "adrfam": "ipv4", 00:25:44.336 "trsvcid": "8009", 00:25:44.336 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:44.336 "wait_for_attach": true, 00:25:44.336 "method": "bdev_nvme_start_discovery", 00:25:44.336 "req_id": 1 00:25:44.336 } 00:25:44.336 Got JSON-RPC error response 00:25:44.336 
response: 00:25:44.336 { 00:25:44.336 "code": -17, 00:25:44.336 "message": "File exists" 00:25:44.336 } 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:44.336 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 
00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.594 09:01:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.527 [2024-11-06 09:01:58.641503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.528 [2024-11-06 09:01:58.641562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597bf0 with addr=10.0.0.2, port=8010 00:25:45.528 [2024-11-06 09:01:58.641587] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.528 [2024-11-06 09:01:58.641601] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.528 [2024-11-06 09:01:58.641614] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:46.461 [2024-11-06 09:01:59.643923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.461 [2024-11-06 09:01:59.643957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x597bf0 with addr=10.0.0.2, port=8010 00:25:46.461 [2024-11-06 09:01:59.643977] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:46.461 [2024-11-06 09:01:59.643990] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:46.461 [2024-11-06 09:01:59.644001] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:47.394 [2024-11-06 09:02:00.646214] bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:47.394 request: 00:25:47.394 { 00:25:47.394 "name": "nvme_second", 00:25:47.394 "trtype": "tcp", 00:25:47.394 "traddr": "10.0.0.2", 00:25:47.394 "adrfam": "ipv4", 00:25:47.394 "trsvcid": "8010", 00:25:47.394 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:47.394 "wait_for_attach": false, 00:25:47.394 
"attach_timeout_ms": 3000, 00:25:47.394 "method": "bdev_nvme_start_discovery", 00:25:47.394 "req_id": 1 00:25:47.394 } 00:25:47.394 Got JSON-RPC error response 00:25:47.394 response: 00:25:47.394 { 00:25:47.394 "code": -110, 00:25:47.394 "message": "Connection timed out" 00:25:47.394 } 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:47.394 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:47.651 09:02:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 901649 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:47.651 rmmod nvme_tcp 00:25:47.651 rmmod nvme_fabrics 00:25:47.651 rmmod nvme_keyring 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 901616 ']' 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 901616 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 901616 ']' 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 901616 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 901616 
00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:47.651 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 901616' 00:25:47.652 killing process with pid 901616 00:25:47.652 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 901616 00:25:47.652 09:02:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 901616 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.909 09:02:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:49.810 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:49.810 00:25:49.810 real 0m14.615s 00:25:49.810 user 0m21.707s 00:25:49.810 sys 0m2.978s 00:25:49.810 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:49.810 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.810 ************************************ 00:25:49.810 END TEST nvmf_host_discovery 00:25:49.810 ************************************ 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.069 ************************************ 00:25:50.069 START TEST nvmf_host_multipath_status 00:25:50.069 ************************************ 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:50.069 * Looking for test storage... 
00:25:50.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lcov --version 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.069 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:50.070 09:02:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.070 09:02:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:25:50.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.070 --rc genhtml_branch_coverage=1 00:25:50.070 --rc genhtml_function_coverage=1 00:25:50.070 --rc genhtml_legend=1 00:25:50.070 --rc geninfo_all_blocks=1 00:25:50.070 --rc geninfo_unexecuted_blocks=1 00:25:50.070 00:25:50.070 ' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:25:50.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.070 --rc genhtml_branch_coverage=1 00:25:50.070 --rc genhtml_function_coverage=1 00:25:50.070 --rc genhtml_legend=1 00:25:50.070 --rc geninfo_all_blocks=1 00:25:50.070 --rc geninfo_unexecuted_blocks=1 00:25:50.070 00:25:50.070 ' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:25:50.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.070 --rc genhtml_branch_coverage=1 00:25:50.070 --rc genhtml_function_coverage=1 00:25:50.070 --rc genhtml_legend=1 00:25:50.070 --rc geninfo_all_blocks=1 00:25:50.070 --rc geninfo_unexecuted_blocks=1 00:25:50.070 00:25:50.070 ' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:25:50.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.070 --rc genhtml_branch_coverage=1 00:25:50.070 --rc genhtml_function_coverage=1 00:25:50.070 --rc genhtml_legend=1 00:25:50.070 --rc geninfo_all_blocks=1 00:25:50.070 --rc geninfo_unexecuted_blocks=1 00:25:50.070 00:25:50.070 ' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:50.070 
09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:50.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:50.070 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:50.071 09:02:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:50.071 09:02:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:52.600 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:52.600 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:52.600 Found net devices under 0000:09:00.0: cvl_0_0 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.600 09:02:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:52.600 Found net devices under 0000:09:00.1: cvl_0_1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:52.600 09:02:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:52.600 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:52.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:52.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:25:52.600 00:25:52.600 --- 10.0.0.2 ping statistics --- 00:25:52.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.600 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:52.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:52.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:52.601 00:25:52.601 --- 10.0.0.1 ping statistics --- 00:25:52.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:52.601 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=904936 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 904936 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 904936 ']' 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.601 [2024-11-06 09:02:05.611248] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:25:52.601 [2024-11-06 09:02:05.611341] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.601 [2024-11-06 09:02:05.682580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:52.601 [2024-11-06 09:02:05.741068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.601 [2024-11-06 09:02:05.741129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:52.601 [2024-11-06 09:02:05.741158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.601 [2024-11-06 09:02:05.741169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.601 [2024-11-06 09:02:05.741179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.601 [2024-11-06 09:02:05.742704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.601 [2024-11-06 09:02:05.742709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.601 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=904936 00:25:52.858 09:02:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:53.115 [2024-11-06 09:02:06.155325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.115 09:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:53.373 Malloc0 00:25:53.373 09:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:53.630 09:02:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:53.888 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.146 [2024-11-06 09:02:07.269364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.146 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:54.403 [2024-11-06 09:02:07.525961] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:54.403 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=905222 00:25:54.403 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 905222 /var/tmp/bdevperf.sock 00:25:54.404 09:02:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 905222 ']' 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.404 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:54.662 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:54.662 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:54.662 09:02:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:54.920 09:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:55.484 Nvme0n1 00:25:55.484 09:02:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:56.085 Nvme0n1 00:25:56.085 09:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:56.085 09:02:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:58.011 09:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:58.011 09:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:58.269 09:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:58.527 09:02:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:59.900 09:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:59.900 09:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:59.900 09:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.900 09:02:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:59.900 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.900 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:59.900 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.900 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.158 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.158 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.158 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.158 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:00.416 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.416 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:00.416 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.416 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:00.673 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.673 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:00.673 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.673 09:02:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:00.932 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.932 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:00.932 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.932 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.189 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.189 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:01.189 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:01.755 09:02:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:01.755 09:02:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.125 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.383 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.383 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.383 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.383 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.640 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.640 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.640 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.640 09:02:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.897 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.897 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:03.897 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.897 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:04.154 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.154 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:04.154 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.155 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.412 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.412 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:04.412 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:04.669 09:02:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:05.234 09:02:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:06.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:06.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:06.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.166 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:06.423 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.423 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:06.423 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.423 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:06.681 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.681 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:06.681 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.681 09:02:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.938 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.938 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.938 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.938 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:07.195 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.195 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:07.195 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.195 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:07.452 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.452 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:07.452 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:07.452 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:07.710 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:07.710 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:07.710 09:02:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:07.967 09:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:08.224 09:02:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:09.156 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:09.156 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:09.156 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.156 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:09.721 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.721 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:09.721 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.721 09:02:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:09.721 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:09.721 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:09.721 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.721 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.286 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:10.544 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.544 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:10.544 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.544 09:02:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.108 09:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.108 09:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:11.108 09:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:11.365 09:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:11.623 09:02:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:12.554 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:12.554 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:12.554 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.554 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:12.811 09:02:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:12.811 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:12.811 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:12.811 09:02:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.069 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.327 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.327 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.327 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.327 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.583 
09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.583 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:13.583 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.583 09:02:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.840 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.840 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:13.840 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.841 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.098 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.098 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:14.098 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:14.355 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:14.612 09:02:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:15.982 09:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:15.982 09:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:15.982 09:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.982 09:02:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.982 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.982 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:15.982 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.982 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:16.239 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.239 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:16.239 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.239 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.495 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.495 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.495 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.495 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.752 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.752 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:16.752 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.752 09:02:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:17.009 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.009 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:17.009 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.009 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:17.266 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.266 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:17.830 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:17.830 09:02:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:17.830 09:02:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:18.087 09:02:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:19.459 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:19.459 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.460 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.717 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.717 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.717 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.717 09:02:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.974 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.974 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.974 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:19.974 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:20.231 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.231 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:20.231 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.231 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:20.489 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.489 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:20.489 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.489 09:02:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:20.746 09:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.746 09:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:20.746 09:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.338 09:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:21.338 09:02:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.731 09:02:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:22.989 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:22.989 09:02:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:22.989 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:22.989 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.247 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.247 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.247 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.247 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.504 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.504 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.504 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.504 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.761 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.761 
09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.761 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.761 09:02:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.018 09:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.018 09:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:24.018 09:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.275 09:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:24.532 09:02:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:25.905 09:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:25.905 09:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.905 09:02:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.905 09:02:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.905 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.905 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.905 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.905 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:26.162 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.162 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:26.162 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.162 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.420 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.420 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.420 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.420 09:02:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.677 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.677 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.677 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.677 09:02:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.934 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.934 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.934 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.934 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:27.499 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.499 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:27.499 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:27.499 09:02:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:27.757 09:02:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.131 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:29.389 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.389 09:02:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.389 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.389 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:29.647 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.647 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:29.647 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.647 09:02:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:29.905 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.905 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:29.905 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.905 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.163 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.163 
09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:30.163 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.163 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 905222 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 905222 ']' 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 905222 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 905222 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 905222' 00:26:30.732 killing process with pid 905222 00:26:30.732 09:02:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 905222 00:26:30.732 09:02:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 905222 00:26:30.732 { 00:26:30.732 "results": [ 00:26:30.732 { 00:26:30.732 "job": "Nvme0n1", 00:26:30.732 "core_mask": "0x4", 00:26:30.732 "workload": "verify", 00:26:30.732 "status": "terminated", 00:26:30.732 "verify_range": { 00:26:30.732 "start": 0, 00:26:30.732 "length": 16384 00:26:30.732 }, 00:26:30.732 "queue_depth": 128, 00:26:30.732 "io_size": 4096, 00:26:30.732 "runtime": 34.415443, 00:26:30.732 "iops": 7939.60432239678, 00:26:30.732 "mibps": 31.01407938436242, 00:26:30.732 "io_failed": 0, 00:26:30.732 "io_timeout": 0, 00:26:30.732 "avg_latency_us": 16092.422133667858, 00:26:30.732 "min_latency_us": 497.58814814814815, 00:26:30.732 "max_latency_us": 4026531.84 00:26:30.732 } 00:26:30.732 ], 00:26:30.732 "core_count": 1 00:26:30.732 } 00:26:30.732 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 905222 00:26:30.732 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.732 [2024-11-06 09:02:07.591685] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:26:30.732 [2024-11-06 09:02:07.591768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905222 ] 00:26:30.732 [2024-11-06 09:02:07.661593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.733 [2024-11-06 09:02:07.720514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:30.733 Running I/O for 90 seconds... 
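The checks traced throughout this log pipe `rpc.py ... bdev_nvme_get_io_paths` through a `jq` filter such as `.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current` and compare the result against the expected value, and the final bdevperf summary reports throughput as `mibps = iops * io_size / 2^20`. The sketch below reproduces both in Python: the JSON shape is an assumption inferred only from the jq filter (not from SPDK documentation), and the `iops`/`io_size` figures are copied from the results block above.

```python
import json

# Sample shaped like the bdev_nvme_get_io_paths output implied by the log's
# jq filter; field names come from that filter, values are illustrative.
io_paths = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]}
    ]
}

def port_status(data, trsvcid, field):
    """Python equivalent of the jq filter used by port_status() in the log:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").<field>
    """
    for group in data["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no path on that service ID

# Figures copied from the bdevperf "results" block in this log.
results = {"iops": 7939.60432239678, "io_size": 4096}

# MiB/s = IOPS * bytes-per-IO / 2^20; with io_size 4096 this is iops / 256.
mibps = results["iops"] * results["io_size"] / (1024 * 1024)

print(port_status(io_paths, "4420", "current"))  # True
print(round(mibps, 2))                            # 31.01
```

Derived this way, 7939.6 IOPS at 4 KiB per I/O works out to about 31.01 MiB/s, consistent with the `"mibps": 31.014...` the log reports.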
00:26:30.733 8419.00 IOPS, 32.89 MiB/s [2024-11-06T08:02:44.022Z] 8583.50 IOPS, 33.53 MiB/s [2024-11-06T08:02:44.022Z] 8529.33 IOPS, 33.32 MiB/s [2024-11-06T08:02:44.022Z] 8562.50 IOPS, 33.45 MiB/s [2024-11-06T08:02:44.022Z] 8555.40 IOPS, 33.42 MiB/s [2024-11-06T08:02:44.022Z] 8568.50 IOPS, 33.47 MiB/s [2024-11-06T08:02:44.022Z] 8552.71 IOPS, 33.41 MiB/s [2024-11-06T08:02:44.022Z] 8561.50 IOPS, 33.44 MiB/s [2024-11-06T08:02:44.022Z] 8569.33 IOPS, 33.47 MiB/s [2024-11-06T08:02:44.022Z] 8564.50 IOPS, 33.46 MiB/s [2024-11-06T08:02:44.022Z] 8560.36 IOPS, 33.44 MiB/s [2024-11-06T08:02:44.022Z] 8544.58 IOPS, 33.38 MiB/s [2024-11-06T08:02:44.022Z] 8547.00 IOPS, 33.39 MiB/s [2024-11-06T08:02:44.022Z] 8550.43 IOPS, 33.40 MiB/s [2024-11-06T08:02:44.022Z] 8556.27 IOPS, 33.42 MiB/s [2024-11-06T08:02:44.022Z] [2024-11-06 09:02:24.388970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.733 [2024-11-06 09:02:24.389025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.733 [2024-11-06 09:02:24.389537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.733 [2024-11-06 09:02:24.389560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:30.733 [2024-11-06 09:02:24.389604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:30.733 [2024-11-06 09:02:24.389627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notice pairs elided: WRITE commands (sqid:1, lba:114376-115232, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (sqid:1, lba:114248-114296, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), timestamps 09:02:24.389660-09:02:24.398648, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:0039 through sqhd:002a, p:0 m:0 dnr:0 ...]
00:26:30.736 [2024-11-06 09:02:24.398704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:30.736 [2024-11-06 09:02:24.398731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:24.398776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:24.398805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:24.398868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:24.398897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:24.398944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:24.398971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:30.736 8060.50 IOPS, 31.49 MiB/s [2024-11-06T08:02:44.025Z] 7586.35 IOPS, 29.63 MiB/s [2024-11-06T08:02:44.025Z] 7164.89 IOPS, 27.99 MiB/s [2024-11-06T08:02:44.025Z] 6787.79 IOPS, 26.51 MiB/s [2024-11-06T08:02:44.025Z] 6845.05 IOPS, 26.74 MiB/s [2024-11-06T08:02:44.025Z] 6925.29 IOPS, 27.05 MiB/s [2024-11-06T08:02:44.025Z] 7024.18 IOPS, 27.44 MiB/s [2024-11-06T08:02:44.025Z] 7172.17 IOPS, 28.02 MiB/s [2024-11-06T08:02:44.025Z] 7314.25 IOPS, 28.57 MiB/s [2024-11-06T08:02:44.025Z] 7458.88 IOPS, 29.14 MiB/s [2024-11-06T08:02:44.025Z] 7504.19 IOPS, 29.31 MiB/s [2024-11-06T08:02:44.025Z] 7542.89 IOPS, 29.46 MiB/s [2024-11-06T08:02:44.025Z] 7578.89 IOPS, 29.61 MiB/s [2024-11-06T08:02:44.025Z] 7643.69 IOPS, 29.86 MiB/s [2024-11-06T08:02:44.025Z] 7735.87 IOPS, 30.22 MiB/s 
[2024-11-06T08:02:44.025Z] 7828.84 IOPS, 30.58 MiB/s [2024-11-06T08:02:44.025Z] [2024-11-06 09:02:41.026337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.736 [2024-11-06 09:02:41.026404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.026939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.026965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.736 [2024-11-06 09:02:41.027420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:30.736 [2024-11-06 09:02:41.027477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.027971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.027998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:26:30.737 [2024-11-06 09:02:41.028037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.028064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.028103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.028131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.029952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.029988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.737 [2024-11-06 09:02:41.030204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:30.737 [2024-11-06 09:02:41.030573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.030872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:30.737 [2024-11-06 09:02:41.030946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.030984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.031012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.031076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.031165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.031232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.031297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:30.737 
[2024-11-06 09:02:41.031334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.737 [2024-11-06 09:02:41.031377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.031438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:30.737 [2024-11-06 09:02:41.031475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.737 [2024-11-06 09:02:41.031501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.031564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.031629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 
09:02:41.031691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.031761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.031847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.031914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.031953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.738 [2024-11-06 09:02:41.031980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.032480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.032507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.033424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.738 [2024-11-06 09:02:41.033459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:30.738 [2024-11-06 09:02:41.033509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.738 [2024-11-06 09:02:41.033537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:30.738 7898.53 IOPS, 30.85 MiB/s [2024-11-06T08:02:44.027Z] 7915.85 IOPS, 30.92 MiB/s [2024-11-06T08:02:44.027Z] 7934.65 IOPS, 30.99 MiB/s [2024-11-06T08:02:44.027Z] Received shutdown signal, test time was about 34.416202 seconds 00:26:30.738 00:26:30.738 Latency(us) 00:26:30.738 [2024-11-06T08:02:44.027Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.738 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:30.738 Verification LBA range: start 0x0 length 0x4000 00:26:30.738 Nvme0n1 : 34.42 7939.60 31.01 0.00 0.00 16092.42 497.59 4026531.84 00:26:30.738 [2024-11-06T08:02:44.027Z] =================================================================================================================== 00:26:30.738 
[2024-11-06T08:02:44.027Z] Total : 7939.60 31.01 0.00 0.00 16092.42 497.59 4026531.84 00:26:30.738 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.303 rmmod nvme_tcp 00:26:31.303 rmmod nvme_fabrics 00:26:31.303 rmmod nvme_keyring 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 904936 ']' 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@516 -- # killprocess 904936 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 904936 ']' 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 904936 00:26:31.303 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 904936 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 904936' 00:26:31.304 killing process with pid 904936 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 904936 00:26:31.304 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 904936 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep 
-v SPDK_NVMF 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.564 09:02:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.468 00:26:33.468 real 0m43.545s 00:26:33.468 user 2m7.494s 00:26:33.468 sys 0m13.095s 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:33.468 ************************************ 00:26:33.468 END TEST nvmf_host_multipath_status 00:26:33.468 ************************************ 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.468 ************************************ 00:26:33.468 START TEST nvmf_discovery_remove_ifc 00:26:33.468 
************************************ 00:26:33.468 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:33.727 * Looking for test storage... 00:26:33.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lcov --version 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.727 09:02:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.727 --rc genhtml_branch_coverage=1 00:26:33.727 --rc genhtml_function_coverage=1 00:26:33.727 --rc genhtml_legend=1 00:26:33.727 --rc geninfo_all_blocks=1 00:26:33.727 --rc geninfo_unexecuted_blocks=1 00:26:33.727 00:26:33.727 ' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.727 --rc genhtml_branch_coverage=1 00:26:33.727 --rc genhtml_function_coverage=1 00:26:33.727 --rc genhtml_legend=1 00:26:33.727 --rc geninfo_all_blocks=1 00:26:33.727 --rc geninfo_unexecuted_blocks=1 00:26:33.727 00:26:33.727 ' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.727 --rc genhtml_branch_coverage=1 00:26:33.727 --rc genhtml_function_coverage=1 00:26:33.727 --rc genhtml_legend=1 00:26:33.727 --rc geninfo_all_blocks=1 00:26:33.727 --rc geninfo_unexecuted_blocks=1 00:26:33.727 00:26:33.727 ' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:33.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.727 --rc genhtml_branch_coverage=1 00:26:33.727 --rc genhtml_function_coverage=1 00:26:33.727 --rc genhtml_legend=1 00:26:33.727 --rc geninfo_all_blocks=1 00:26:33.727 --rc geninfo_unexecuted_blocks=1 00:26:33.727 00:26:33.727 ' 00:26:33.727 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.728 09:02:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:33.728 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:33.728 
09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.728 09:02:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:36.259 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:36.260 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:36.260 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:36.260 Found net devices under 0000:09:00.0: cvl_0_0 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:36.260 09:02:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:36.260 Found net devices under 0000:09:00.1: cvl_0_1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:36.260 09:02:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:36.260 09:02:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:36.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:36.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:26:36.260 00:26:36.260 --- 10.0.0.2 ping statistics --- 00:26:36.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.260 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:36.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:36.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:26:36.260 00:26:36.260 --- 10.0.0.1 ping statistics --- 00:26:36.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:36.260 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=911695 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 911695 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 911695 ']' 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:36.260 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:36.260 [2024-11-06 09:02:49.318821] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:26:36.260 [2024-11-06 09:02:49.318928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:36.260 [2024-11-06 09:02:49.392471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.260 [2024-11-06 09:02:49.445209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:36.260 [2024-11-06 09:02:49.445267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:36.260 [2024-11-06 09:02:49.445295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:36.260 [2024-11-06 09:02:49.445313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:36.260 [2024-11-06 09:02:49.445323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:36.260 [2024-11-06 09:02:49.445897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:36.517 [2024-11-06 09:02:49.588586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:36.517 [2024-11-06 09:02:49.596751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:26:36.517 null0
00:26:36.517 [2024-11-06 09:02:49.628721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=911729
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 911729 /tmp/host.sock
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 911729 ']'
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:26:36.517 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:36.517 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:36.518 [2024-11-06 09:02:49.693875] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:26:36.518 [2024-11-06 09:02:49.693958] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911729 ]
00:26:36.518 [2024-11-06 09:02:49.757716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:36.775 [2024-11-06 09:02:49.814454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.775 09:02:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:36.775 09:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:36.775 09:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:26:36.775 09:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.775 09:02:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:38.146 [2024-11-06 09:02:51.045679] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:38.146 [2024-11-06 09:02:51.045701] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:38.146 [2024-11-06 09:02:51.045726] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:38.146 [2024-11-06 09:02:51.134039] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:38.146 [2024-11-06 09:02:51.194728] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:26:38.146 [2024-11-06 09:02:51.195689] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xed2ba0:1 started.
00:26:38.146 [2024-11-06 09:02:51.197395] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:26:38.146 [2024-11-06 09:02:51.197448] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:26:38.146 [2024-11-06 09:02:51.197481] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:26:38.146 [2024-11-06 09:02:51.197503] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:38.146 [2024-11-06 09:02:51.197526] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:38.146 [2024-11-06 09:02:51.204357] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xed2ba0 was disconnected and freed. delete nvme_qpair.
00:26:38.146 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:38.147 09:02:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:39.078 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:39.078 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:39.078 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:39.078 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.078 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:39.079 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:39.079 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:39.079 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.336 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:39.336 09:02:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:40.268 09:02:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:41.201 09:02:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:42.573 09:02:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:43.507 09:02:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:43.507 [2024-11-06 09:02:56.638948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:26:43.507 [2024-11-06 09:02:56.639007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:43.507 [2024-11-06 09:02:56.639026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.507 [2024-11-06 09:02:56.639043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:43.507 [2024-11-06 09:02:56.639056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.507 [2024-11-06 09:02:56.639069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:43.507 [2024-11-06 09:02:56.639081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.507 [2024-11-06 09:02:56.639100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:43.507 [2024-11-06 09:02:56.639113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.507 [2024-11-06 09:02:56.639126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:26:43.507 [2024-11-06 09:02:56.639138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:43.507 [2024-11-06 09:02:56.639150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf400 is same with the state(6) to be set
00:26:43.507 [2024-11-06 09:02:56.648969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeaf400 (9): Bad file descriptor
00:26:43.507 [2024-11-06 09:02:56.659012] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:43.507 [2024-11-06 09:02:56.659034] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:43.507 [2024-11-06 09:02:56.659045] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:43.507 [2024-11-06 09:02:56.659053] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:43.507 [2024-11-06 09:02:56.659092] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:44.441 [2024-11-06 09:02:57.702871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:26:44.441 [2024-11-06 09:02:57.702929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeaf400 with addr=10.0.0.2, port=4420
00:26:44.441 [2024-11-06 09:02:57.702949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaf400 is same with the state(6) to be set
00:26:44.441 [2024-11-06 09:02:57.702981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeaf400 (9): Bad file descriptor
00:26:44.441 [2024-11-06 09:02:57.703378] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:26:44.441 [2024-11-06 09:02:57.703417] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:44.441 [2024-11-06 09:02:57.703434] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:44.441 [2024-11-06 09:02:57.703449] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:44.441 [2024-11-06 09:02:57.703461] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:44.441 [2024-11-06 09:02:57.703472] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:44.441 [2024-11-06 09:02:57.703491] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:44.441 [2024-11-06 09:02:57.703507] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:44.441 [2024-11-06 09:02:57.703521] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:26:44.441 09:02:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:45.813 [2024-11-06 09:02:58.706009] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:45.813 [2024-11-06 09:02:58.706052] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:45.813 [2024-11-06 09:02:58.706077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:45.813 [2024-11-06 09:02:58.706091] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:45.813 [2024-11-06 09:02:58.706105] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:26:45.813 [2024-11-06 09:02:58.706134] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:45.813 [2024-11-06 09:02:58.706147] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:45.814 [2024-11-06 09:02:58.706172] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:45.814 [2024-11-06 09:02:58.706229] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:26:45.814 [2024-11-06 09:02:58.706289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.814 [2024-11-06 09:02:58.706310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.814 [2024-11-06 09:02:58.706329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.814 [2024-11-06 09:02:58.706342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.814 [2024-11-06 09:02:58.706356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.814 [2024-11-06 09:02:58.706368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.814 [2024-11-06 09:02:58.706382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.814 [2024-11-06 09:02:58.706395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.814 [2024-11-06 09:02:58.706409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:26:45.814 [2024-11-06 09:02:58.706422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:45.814 [2024-11-06 09:02:58.706435] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:26:45.814 [2024-11-06 09:02:58.706526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9eb40 (9): Bad file descriptor
00:26:45.814 [2024-11-06 09:02:58.707551] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:26:45.814 [2024-11-06 09:02:58.707572] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:45.814 09:02:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:46.746 09:02:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:47.678 [2024-11-06 09:03:00.762042] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:47.678 [2024-11-06 09:03:00.762067] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:47.678 [2024-11-06 09:03:00.762089] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:47.678 [2024-11-06 09:03:00.848401] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:26:47.678 09:03:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:26:47.936 [2024-11-06 09:03:01.030473] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:26:47.936 [2024-11-06 09:03:01.031287] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xeb95f0:1 started.
00:26:47.936 [2024-11-06 09:03:01.032597] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:47.936 [2024-11-06 09:03:01.032637] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:47.936 [2024-11-06 09:03:01.032665] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:47.936 [2024-11-06 09:03:01.032685] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:47.936 [2024-11-06 09:03:01.032698] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:47.936 [2024-11-06 09:03:01.040308] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xeb95f0 was disconnected and freed. delete nvme_qpair. 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:48.869 09:03:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 911729 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 911729 ']' 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 911729 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.869 09:03:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911729 00:26:48.869 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:48.869 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:48.869 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911729' 00:26:48.869 killing process with pid 911729 00:26:48.869 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 911729 00:26:48.869 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 911729 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.159 09:03:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.159 rmmod nvme_tcp 00:26:49.159 rmmod nvme_fabrics 00:26:49.159 rmmod nvme_keyring 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 911695 ']' 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 911695 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 911695 ']' 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 911695 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911695 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911695' 00:26:49.159 killing process 
with pid 911695 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 911695 00:26:49.159 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 911695 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.440 09:03:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:51.342 00:26:51.342 real 0m17.877s 00:26:51.342 user 0m25.728s 00:26:51.342 sys 0m3.080s 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.342 ************************************ 00:26:51.342 END TEST nvmf_discovery_remove_ifc 00:26:51.342 ************************************ 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.342 09:03:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.601 ************************************ 00:26:51.601 START TEST nvmf_identify_kernel_target 00:26:51.601 ************************************ 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:51.601 * Looking for test storage... 
00:26:51.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lcov --version 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:51.601 09:03:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.601 09:03:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:26:51.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.601 --rc genhtml_branch_coverage=1 00:26:51.601 --rc genhtml_function_coverage=1 00:26:51.601 --rc genhtml_legend=1 00:26:51.601 --rc geninfo_all_blocks=1 00:26:51.601 --rc geninfo_unexecuted_blocks=1 00:26:51.601 00:26:51.601 ' 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:26:51.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.601 --rc genhtml_branch_coverage=1 00:26:51.601 --rc genhtml_function_coverage=1 00:26:51.601 --rc genhtml_legend=1 00:26:51.601 --rc geninfo_all_blocks=1 00:26:51.601 --rc geninfo_unexecuted_blocks=1 00:26:51.601 00:26:51.601 ' 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:26:51.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.601 --rc genhtml_branch_coverage=1 00:26:51.601 --rc genhtml_function_coverage=1 00:26:51.601 --rc genhtml_legend=1 00:26:51.601 --rc geninfo_all_blocks=1 00:26:51.601 --rc geninfo_unexecuted_blocks=1 00:26:51.601 00:26:51.601 ' 00:26:51.601 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:26:51.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.602 --rc genhtml_branch_coverage=1 00:26:51.602 --rc genhtml_function_coverage=1 00:26:51.602 --rc genhtml_legend=1 00:26:51.602 --rc geninfo_all_blocks=1 00:26:51.602 --rc geninfo_unexecuted_blocks=1 00:26:51.602 00:26:51.602 ' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.602 09:03:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:54.130 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.131 09:03:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:54.131 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.131 09:03:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:54.131 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.131 09:03:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:54.131 Found net devices under 0000:09:00.0: cvl_0_0 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:54.131 Found net devices under 0000:09:00.1: cvl_0_1 
00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.131 09:03:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:54.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:54.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:26:54.132 00:26:54.132 --- 10.0.0.2 ping statistics --- 00:26:54.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.132 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:54.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:26:54.132 00:26:54.132 --- 10.0.0.1 ping statistics --- 00:26:54.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.132 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:54.132 
09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:54.132 09:03:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:55.064 Waiting for block devices as requested 00:26:55.064 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:55.064 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:55.322 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:55.322 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:55.322 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:55.580 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:55.580 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:55.580 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:55.580 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:26:55.838 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:55.838 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:55.838 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:56.096 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:56.096 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:56.096 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:26:56.096 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:56.354 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.354 No valid GPT data, bailing 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:26:56.354 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.611 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:26:56.611 00:26:56.611 Discovery Log Number of Records 2, Generation counter 2 00:26:56.611 =====Discovery Log Entry 0====== 00:26:56.611 trtype: tcp 00:26:56.611 adrfam: ipv4 00:26:56.611 subtype: current discovery subsystem 
00:26:56.611 treq: not specified, sq flow control disable supported 00:26:56.611 portid: 1 00:26:56.611 trsvcid: 4420 00:26:56.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.611 traddr: 10.0.0.1 00:26:56.611 eflags: none 00:26:56.611 sectype: none 00:26:56.611 =====Discovery Log Entry 1====== 00:26:56.611 trtype: tcp 00:26:56.611 adrfam: ipv4 00:26:56.611 subtype: nvme subsystem 00:26:56.611 treq: not specified, sq flow control disable supported 00:26:56.611 portid: 1 00:26:56.612 trsvcid: 4420 00:26:56.612 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:56.612 traddr: 10.0.0.1 00:26:56.612 eflags: none 00:26:56.612 sectype: none 00:26:56.612 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:56.612 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:56.612 ===================================================== 00:26:56.612 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:56.612 ===================================================== 00:26:56.612 Controller Capabilities/Features 00:26:56.612 ================================ 00:26:56.612 Vendor ID: 0000 00:26:56.612 Subsystem Vendor ID: 0000 00:26:56.612 Serial Number: c547a31e86c1635ac7f9 00:26:56.612 Model Number: Linux 00:26:56.612 Firmware Version: 6.8.9-20 00:26:56.612 Recommended Arb Burst: 0 00:26:56.612 IEEE OUI Identifier: 00 00 00 00:26:56.612 Multi-path I/O 00:26:56.612 May have multiple subsystem ports: No 00:26:56.612 May have multiple controllers: No 00:26:56.612 Associated with SR-IOV VF: No 00:26:56.612 Max Data Transfer Size: Unlimited 00:26:56.612 Max Number of Namespaces: 0 00:26:56.612 Max Number of I/O Queues: 1024 00:26:56.612 NVMe Specification Version (VS): 1.3 00:26:56.612 NVMe Specification Version (Identify): 1.3 00:26:56.612 Maximum Queue Entries: 1024 
00:26:56.612 Contiguous Queues Required: No 00:26:56.612 Arbitration Mechanisms Supported 00:26:56.612 Weighted Round Robin: Not Supported 00:26:56.612 Vendor Specific: Not Supported 00:26:56.612 Reset Timeout: 7500 ms 00:26:56.612 Doorbell Stride: 4 bytes 00:26:56.612 NVM Subsystem Reset: Not Supported 00:26:56.612 Command Sets Supported 00:26:56.612 NVM Command Set: Supported 00:26:56.612 Boot Partition: Not Supported 00:26:56.612 Memory Page Size Minimum: 4096 bytes 00:26:56.612 Memory Page Size Maximum: 4096 bytes 00:26:56.612 Persistent Memory Region: Not Supported 00:26:56.612 Optional Asynchronous Events Supported 00:26:56.612 Namespace Attribute Notices: Not Supported 00:26:56.612 Firmware Activation Notices: Not Supported 00:26:56.612 ANA Change Notices: Not Supported 00:26:56.612 PLE Aggregate Log Change Notices: Not Supported 00:26:56.612 LBA Status Info Alert Notices: Not Supported 00:26:56.612 EGE Aggregate Log Change Notices: Not Supported 00:26:56.612 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.612 Zone Descriptor Change Notices: Not Supported 00:26:56.612 Discovery Log Change Notices: Supported 00:26:56.612 Controller Attributes 00:26:56.612 128-bit Host Identifier: Not Supported 00:26:56.612 Non-Operational Permissive Mode: Not Supported 00:26:56.612 NVM Sets: Not Supported 00:26:56.612 Read Recovery Levels: Not Supported 00:26:56.612 Endurance Groups: Not Supported 00:26:56.612 Predictable Latency Mode: Not Supported 00:26:56.612 Traffic Based Keep ALive: Not Supported 00:26:56.612 Namespace Granularity: Not Supported 00:26:56.612 SQ Associations: Not Supported 00:26:56.612 UUID List: Not Supported 00:26:56.612 Multi-Domain Subsystem: Not Supported 00:26:56.612 Fixed Capacity Management: Not Supported 00:26:56.612 Variable Capacity Management: Not Supported 00:26:56.612 Delete Endurance Group: Not Supported 00:26:56.612 Delete NVM Set: Not Supported 00:26:56.612 Extended LBA Formats Supported: Not Supported 00:26:56.612 Flexible 
Data Placement Supported: Not Supported 00:26:56.612 00:26:56.612 Controller Memory Buffer Support 00:26:56.612 ================================ 00:26:56.612 Supported: No 00:26:56.612 00:26:56.612 Persistent Memory Region Support 00:26:56.612 ================================ 00:26:56.612 Supported: No 00:26:56.612 00:26:56.612 Admin Command Set Attributes 00:26:56.612 ============================ 00:26:56.612 Security Send/Receive: Not Supported 00:26:56.612 Format NVM: Not Supported 00:26:56.612 Firmware Activate/Download: Not Supported 00:26:56.612 Namespace Management: Not Supported 00:26:56.612 Device Self-Test: Not Supported 00:26:56.612 Directives: Not Supported 00:26:56.612 NVMe-MI: Not Supported 00:26:56.612 Virtualization Management: Not Supported 00:26:56.612 Doorbell Buffer Config: Not Supported 00:26:56.612 Get LBA Status Capability: Not Supported 00:26:56.612 Command & Feature Lockdown Capability: Not Supported 00:26:56.612 Abort Command Limit: 1 00:26:56.612 Async Event Request Limit: 1 00:26:56.612 Number of Firmware Slots: N/A 00:26:56.612 Firmware Slot 1 Read-Only: N/A 00:26:56.612 Firmware Activation Without Reset: N/A 00:26:56.612 Multiple Update Detection Support: N/A 00:26:56.612 Firmware Update Granularity: No Information Provided 00:26:56.612 Per-Namespace SMART Log: No 00:26:56.612 Asymmetric Namespace Access Log Page: Not Supported 00:26:56.612 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:56.612 Command Effects Log Page: Not Supported 00:26:56.612 Get Log Page Extended Data: Supported 00:26:56.612 Telemetry Log Pages: Not Supported 00:26:56.612 Persistent Event Log Pages: Not Supported 00:26:56.612 Supported Log Pages Log Page: May Support 00:26:56.612 Commands Supported & Effects Log Page: Not Supported 00:26:56.612 Feature Identifiers & Effects Log Page:May Support 00:26:56.612 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.612 Data Area 4 for Telemetry Log: Not Supported 00:26:56.612 Error Log Page Entries 
Supported: 1 00:26:56.612 Keep Alive: Not Supported 00:26:56.612 00:26:56.612 NVM Command Set Attributes 00:26:56.612 ========================== 00:26:56.612 Submission Queue Entry Size 00:26:56.612 Max: 1 00:26:56.612 Min: 1 00:26:56.612 Completion Queue Entry Size 00:26:56.612 Max: 1 00:26:56.612 Min: 1 00:26:56.612 Number of Namespaces: 0 00:26:56.612 Compare Command: Not Supported 00:26:56.612 Write Uncorrectable Command: Not Supported 00:26:56.612 Dataset Management Command: Not Supported 00:26:56.612 Write Zeroes Command: Not Supported 00:26:56.612 Set Features Save Field: Not Supported 00:26:56.612 Reservations: Not Supported 00:26:56.612 Timestamp: Not Supported 00:26:56.612 Copy: Not Supported 00:26:56.612 Volatile Write Cache: Not Present 00:26:56.612 Atomic Write Unit (Normal): 1 00:26:56.612 Atomic Write Unit (PFail): 1 00:26:56.612 Atomic Compare & Write Unit: 1 00:26:56.612 Fused Compare & Write: Not Supported 00:26:56.612 Scatter-Gather List 00:26:56.612 SGL Command Set: Supported 00:26:56.612 SGL Keyed: Not Supported 00:26:56.612 SGL Bit Bucket Descriptor: Not Supported 00:26:56.612 SGL Metadata Pointer: Not Supported 00:26:56.612 Oversized SGL: Not Supported 00:26:56.612 SGL Metadata Address: Not Supported 00:26:56.612 SGL Offset: Supported 00:26:56.612 Transport SGL Data Block: Not Supported 00:26:56.612 Replay Protected Memory Block: Not Supported 00:26:56.612 00:26:56.612 Firmware Slot Information 00:26:56.612 ========================= 00:26:56.612 Active slot: 0 00:26:56.612 00:26:56.612 00:26:56.612 Error Log 00:26:56.612 ========= 00:26:56.612 00:26:56.612 Active Namespaces 00:26:56.612 ================= 00:26:56.612 Discovery Log Page 00:26:56.612 ================== 00:26:56.612 Generation Counter: 2 00:26:56.612 Number of Records: 2 00:26:56.612 Record Format: 0 00:26:56.612 00:26:56.612 Discovery Log Entry 0 00:26:56.612 ---------------------- 00:26:56.612 Transport Type: 3 (TCP) 00:26:56.612 Address Family: 1 (IPv4) 00:26:56.612 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:56.612 Entry Flags: 00:26:56.612 Duplicate Returned Information: 0 00:26:56.612 Explicit Persistent Connection Support for Discovery: 0 00:26:56.612 Transport Requirements: 00:26:56.612 Secure Channel: Not Specified 00:26:56.612 Port ID: 1 (0x0001) 00:26:56.612 Controller ID: 65535 (0xffff) 00:26:56.612 Admin Max SQ Size: 32 00:26:56.612 Transport Service Identifier: 4420 00:26:56.612 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:56.612 Transport Address: 10.0.0.1 00:26:56.612 Discovery Log Entry 1 00:26:56.612 ---------------------- 00:26:56.612 Transport Type: 3 (TCP) 00:26:56.612 Address Family: 1 (IPv4) 00:26:56.613 Subsystem Type: 2 (NVM Subsystem) 00:26:56.613 Entry Flags: 00:26:56.613 Duplicate Returned Information: 0 00:26:56.613 Explicit Persistent Connection Support for Discovery: 0 00:26:56.613 Transport Requirements: 00:26:56.613 Secure Channel: Not Specified 00:26:56.613 Port ID: 1 (0x0001) 00:26:56.613 Controller ID: 65535 (0xffff) 00:26:56.613 Admin Max SQ Size: 32 00:26:56.613 Transport Service Identifier: 4420 00:26:56.613 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:56.613 Transport Address: 10.0.0.1 00:26:56.613 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.873 get_feature(0x01) failed 00:26:56.873 get_feature(0x02) failed 00:26:56.873 get_feature(0x04) failed 00:26:56.873 ===================================================== 00:26:56.873 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.873 ===================================================== 00:26:56.873 Controller Capabilities/Features 00:26:56.873 ================================ 00:26:56.873 Vendor ID: 0000 00:26:56.873 Subsystem Vendor ID: 
0000 00:26:56.873 Serial Number: 93b17effd9bad1970f4a 00:26:56.873 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.873 Firmware Version: 6.8.9-20 00:26:56.873 Recommended Arb Burst: 6 00:26:56.873 IEEE OUI Identifier: 00 00 00 00:26:56.873 Multi-path I/O 00:26:56.873 May have multiple subsystem ports: Yes 00:26:56.873 May have multiple controllers: Yes 00:26:56.873 Associated with SR-IOV VF: No 00:26:56.873 Max Data Transfer Size: Unlimited 00:26:56.873 Max Number of Namespaces: 1024 00:26:56.873 Max Number of I/O Queues: 128 00:26:56.873 NVMe Specification Version (VS): 1.3 00:26:56.873 NVMe Specification Version (Identify): 1.3 00:26:56.873 Maximum Queue Entries: 1024 00:26:56.873 Contiguous Queues Required: No 00:26:56.873 Arbitration Mechanisms Supported 00:26:56.873 Weighted Round Robin: Not Supported 00:26:56.873 Vendor Specific: Not Supported 00:26:56.873 Reset Timeout: 7500 ms 00:26:56.873 Doorbell Stride: 4 bytes 00:26:56.873 NVM Subsystem Reset: Not Supported 00:26:56.873 Command Sets Supported 00:26:56.873 NVM Command Set: Supported 00:26:56.873 Boot Partition: Not Supported 00:26:56.873 Memory Page Size Minimum: 4096 bytes 00:26:56.873 Memory Page Size Maximum: 4096 bytes 00:26:56.873 Persistent Memory Region: Not Supported 00:26:56.873 Optional Asynchronous Events Supported 00:26:56.873 Namespace Attribute Notices: Supported 00:26:56.873 Firmware Activation Notices: Not Supported 00:26:56.873 ANA Change Notices: Supported 00:26:56.873 PLE Aggregate Log Change Notices: Not Supported 00:26:56.873 LBA Status Info Alert Notices: Not Supported 00:26:56.873 EGE Aggregate Log Change Notices: Not Supported 00:26:56.873 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.873 Zone Descriptor Change Notices: Not Supported 00:26:56.873 Discovery Log Change Notices: Not Supported 00:26:56.873 Controller Attributes 00:26:56.873 128-bit Host Identifier: Supported 00:26:56.873 Non-Operational Permissive Mode: Not Supported 00:26:56.873 NVM Sets: Not 
Supported 00:26:56.873 Read Recovery Levels: Not Supported 00:26:56.873 Endurance Groups: Not Supported 00:26:56.873 Predictable Latency Mode: Not Supported 00:26:56.873 Traffic Based Keep ALive: Supported 00:26:56.873 Namespace Granularity: Not Supported 00:26:56.873 SQ Associations: Not Supported 00:26:56.873 UUID List: Not Supported 00:26:56.873 Multi-Domain Subsystem: Not Supported 00:26:56.873 Fixed Capacity Management: Not Supported 00:26:56.873 Variable Capacity Management: Not Supported 00:26:56.873 Delete Endurance Group: Not Supported 00:26:56.873 Delete NVM Set: Not Supported 00:26:56.873 Extended LBA Formats Supported: Not Supported 00:26:56.873 Flexible Data Placement Supported: Not Supported 00:26:56.873 00:26:56.873 Controller Memory Buffer Support 00:26:56.873 ================================ 00:26:56.873 Supported: No 00:26:56.873 00:26:56.873 Persistent Memory Region Support 00:26:56.873 ================================ 00:26:56.873 Supported: No 00:26:56.873 00:26:56.873 Admin Command Set Attributes 00:26:56.873 ============================ 00:26:56.873 Security Send/Receive: Not Supported 00:26:56.873 Format NVM: Not Supported 00:26:56.873 Firmware Activate/Download: Not Supported 00:26:56.873 Namespace Management: Not Supported 00:26:56.873 Device Self-Test: Not Supported 00:26:56.873 Directives: Not Supported 00:26:56.873 NVMe-MI: Not Supported 00:26:56.873 Virtualization Management: Not Supported 00:26:56.873 Doorbell Buffer Config: Not Supported 00:26:56.873 Get LBA Status Capability: Not Supported 00:26:56.873 Command & Feature Lockdown Capability: Not Supported 00:26:56.873 Abort Command Limit: 4 00:26:56.873 Async Event Request Limit: 4 00:26:56.873 Number of Firmware Slots: N/A 00:26:56.873 Firmware Slot 1 Read-Only: N/A 00:26:56.873 Firmware Activation Without Reset: N/A 00:26:56.873 Multiple Update Detection Support: N/A 00:26:56.873 Firmware Update Granularity: No Information Provided 00:26:56.873 Per-Namespace SMART Log: Yes 
00:26:56.873 Asymmetric Namespace Access Log Page: Supported 00:26:56.873 ANA Transition Time : 10 sec 00:26:56.873 00:26:56.873 Asymmetric Namespace Access Capabilities 00:26:56.873 ANA Optimized State : Supported 00:26:56.873 ANA Non-Optimized State : Supported 00:26:56.873 ANA Inaccessible State : Supported 00:26:56.873 ANA Persistent Loss State : Supported 00:26:56.873 ANA Change State : Supported 00:26:56.873 ANAGRPID is not changed : No 00:26:56.873 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:56.873 00:26:56.873 ANA Group Identifier Maximum : 128 00:26:56.873 Number of ANA Group Identifiers : 128 00:26:56.873 Max Number of Allowed Namespaces : 1024 00:26:56.873 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:56.873 Command Effects Log Page: Supported 00:26:56.873 Get Log Page Extended Data: Supported 00:26:56.873 Telemetry Log Pages: Not Supported 00:26:56.873 Persistent Event Log Pages: Not Supported 00:26:56.873 Supported Log Pages Log Page: May Support 00:26:56.873 Commands Supported & Effects Log Page: Not Supported 00:26:56.873 Feature Identifiers & Effects Log Page:May Support 00:26:56.873 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.873 Data Area 4 for Telemetry Log: Not Supported 00:26:56.873 Error Log Page Entries Supported: 128 00:26:56.873 Keep Alive: Supported 00:26:56.873 Keep Alive Granularity: 1000 ms 00:26:56.873 00:26:56.873 NVM Command Set Attributes 00:26:56.873 ========================== 00:26:56.873 Submission Queue Entry Size 00:26:56.873 Max: 64 00:26:56.873 Min: 64 00:26:56.873 Completion Queue Entry Size 00:26:56.873 Max: 16 00:26:56.873 Min: 16 00:26:56.873 Number of Namespaces: 1024 00:26:56.873 Compare Command: Not Supported 00:26:56.873 Write Uncorrectable Command: Not Supported 00:26:56.873 Dataset Management Command: Supported 00:26:56.874 Write Zeroes Command: Supported 00:26:56.874 Set Features Save Field: Not Supported 00:26:56.874 Reservations: Not Supported 00:26:56.874 Timestamp: Not Supported 
00:26:56.874 Copy: Not Supported
00:26:56.874 Volatile Write Cache: Present
00:26:56.874 Atomic Write Unit (Normal): 1
00:26:56.874 Atomic Write Unit (PFail): 1
00:26:56.874 Atomic Compare & Write Unit: 1
00:26:56.874 Fused Compare & Write: Not Supported
00:26:56.874 Scatter-Gather List
00:26:56.874 SGL Command Set: Supported
00:26:56.874 SGL Keyed: Not Supported
00:26:56.874 SGL Bit Bucket Descriptor: Not Supported
00:26:56.874 SGL Metadata Pointer: Not Supported
00:26:56.874 Oversized SGL: Not Supported
00:26:56.874 SGL Metadata Address: Not Supported
00:26:56.874 SGL Offset: Supported
00:26:56.874 Transport SGL Data Block: Not Supported
00:26:56.874 Replay Protected Memory Block: Not Supported
00:26:56.874 
00:26:56.874 Firmware Slot Information
00:26:56.874 =========================
00:26:56.874 Active slot: 0
00:26:56.874 
00:26:56.874 Asymmetric Namespace Access
00:26:56.874 ===========================
00:26:56.874 Change Count : 0
00:26:56.874 Number of ANA Group Descriptors : 1
00:26:56.874 ANA Group Descriptor : 0
00:26:56.874 ANA Group ID : 1
00:26:56.874 Number of NSID Values : 1
00:26:56.874 Change Count : 0
00:26:56.874 ANA State : 1
00:26:56.874 Namespace Identifier : 1
00:26:56.874 
00:26:56.874 Commands Supported and Effects
00:26:56.874 ==============================
00:26:56.874 Admin Commands
00:26:56.874 --------------
00:26:56.874 Get Log Page (02h): Supported
00:26:56.874 Identify (06h): Supported
00:26:56.874 Abort (08h): Supported
00:26:56.874 Set Features (09h): Supported
00:26:56.874 Get Features (0Ah): Supported
00:26:56.874 Asynchronous Event Request (0Ch): Supported
00:26:56.874 Keep Alive (18h): Supported
00:26:56.874 I/O Commands
00:26:56.874 ------------
00:26:56.874 Flush (00h): Supported
00:26:56.874 Write (01h): Supported LBA-Change
00:26:56.874 Read (02h): Supported
00:26:56.874 Write Zeroes (08h): Supported LBA-Change
00:26:56.874 Dataset Management (09h): Supported
00:26:56.874 
00:26:56.874 Error Log
00:26:56.874 =========
00:26:56.874 Entry: 0
00:26:56.874 Error Count: 0x3
00:26:56.874 Submission Queue Id: 0x0
00:26:56.874 Command Id: 0x5
00:26:56.874 Phase Bit: 0
00:26:56.874 Status Code: 0x2
00:26:56.874 Status Code Type: 0x0
00:26:56.874 Do Not Retry: 1
00:26:56.874 Error Location: 0x28
00:26:56.874 LBA: 0x0
00:26:56.874 Namespace: 0x0
00:26:56.874 Vendor Log Page: 0x0
00:26:56.874 -----------
00:26:56.874 Entry: 1
00:26:56.874 Error Count: 0x2
00:26:56.874 Submission Queue Id: 0x0
00:26:56.874 Command Id: 0x5
00:26:56.874 Phase Bit: 0
00:26:56.874 Status Code: 0x2
00:26:56.874 Status Code Type: 0x0
00:26:56.874 Do Not Retry: 1
00:26:56.874 Error Location: 0x28
00:26:56.874 LBA: 0x0
00:26:56.874 Namespace: 0x0
00:26:56.874 Vendor Log Page: 0x0
00:26:56.874 -----------
00:26:56.874 Entry: 2
00:26:56.874 Error Count: 0x1
00:26:56.874 Submission Queue Id: 0x0
00:26:56.874 Command Id: 0x4
00:26:56.874 Phase Bit: 0
00:26:56.874 Status Code: 0x2
00:26:56.874 Status Code Type: 0x0
00:26:56.874 Do Not Retry: 1
00:26:56.874 Error Location: 0x28
00:26:56.874 LBA: 0x0
00:26:56.874 Namespace: 0x0
00:26:56.874 Vendor Log Page: 0x0
00:26:56.874 
00:26:56.874 Number of Queues
00:26:56.874 ================
00:26:56.874 Number of I/O Submission Queues: 128
00:26:56.874 Number of I/O Completion Queues: 128
00:26:56.874 
00:26:56.874 ZNS Specific Controller Data
00:26:56.874 ============================
00:26:56.874 Zone Append Size Limit: 0
00:26:56.874 
00:26:56.874 
00:26:56.874 Active Namespaces
00:26:56.874 =================
00:26:56.874 get_feature(0x05) failed
00:26:56.874 Namespace ID:1
00:26:56.874 Command Set Identifier: NVM (00h)
00:26:56.874 Deallocate: Supported
00:26:56.874 Deallocated/Unwritten Error: Not Supported
00:26:56.874 Deallocated Read Value: Unknown
00:26:56.874 Deallocate in Write Zeroes: Not Supported
00:26:56.874 Deallocated Guard Field: 0xFFFF
00:26:56.874 Flush: Supported
00:26:56.874 Reservation: Not Supported
00:26:56.874 Namespace Sharing Capabilities: Multiple Controllers
00:26:56.874 Size (in LBAs): 1953525168 (931GiB)
00:26:56.874 Capacity (in LBAs): 1953525168 (931GiB)
00:26:56.874 Utilization (in LBAs): 1953525168 (931GiB)
00:26:56.874 UUID: 1ae4754e-4bc9-43d9-a6d0-bb23c17f1382
00:26:56.874 Thin Provisioning: Not Supported
00:26:56.874 Per-NS Atomic Units: Yes
00:26:56.874 Atomic Boundary Size (Normal): 0
00:26:56.874 Atomic Boundary Size (PFail): 0
00:26:56.874 Atomic Boundary Offset: 0
00:26:56.874 NGUID/EUI64 Never Reused: No
00:26:56.874 ANA group ID: 1
00:26:56.874 Namespace Write Protected: No
00:26:56.874 Number of LBA Formats: 1
00:26:56.874 Current LBA Format: LBA Format #00
00:26:56.874 LBA Format #00: Data Size: 512 Metadata Size: 0
00:26:56.874 
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:56.874 rmmod nvme_tcp
00:26:56.874 rmmod nvme_fabrics
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']'
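All three error-log entries above report Status Code 0x2 with Status Code Type 0x0; in the NVMe base specification's generic command status set, 0x02 is "Invalid Field in Command", which is expected here since the kernel target rejects optional fields probed during the identify pass. A minimal lookup for the handful of generic codes that show up in logs like this one; it is a sketch, not the spec's full status table:

```shell
# Decode an NVMe status code of Status Code Type 0x0 (generic command
# status). Only a few values from the spec's generic set are mapped.
decode_generic_status() {
    case "$1" in
        0x0) echo "Successful Completion" ;;
        0x1) echo "Invalid Command Opcode" ;;
        0x2) echo "Invalid Field in Command" ;;
        0x4) echo "Data Transfer Error" ;;
        *)   echo "Unmapped generic status: $1" ;;
    esac
}

decode_generic_status 0x2   # the code carried by all three entries above
```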
00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.874 09:03:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0 00:26:58.775 09:03:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:58.775 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:59.034 09:03:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:00.408 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:00.408 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:00.408 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:00.408 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:00.409 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:27:01.344 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:01.344 00:27:01.344 real 0m9.902s 00:27:01.344 user 0m2.102s 00:27:01.344 sys 0m3.723s 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 ************************************ 00:27:01.344 END TEST nvmf_identify_kernel_target 00:27:01.344 ************************************ 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:01.344 09:03:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.344 ************************************ 00:27:01.345 START TEST nvmf_auth_host 00:27:01.345 ************************************ 00:27:01.345 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:01.604 * Looking for test storage... 
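The clean_kernel_target sequence traced at 09:03:12 above tears down the kernel nvmet configfs tree strictly child-before-parent: disable the namespace, drop the port-to-subsystem link, remove the namespace, port, and subsystem directories, then unload the modules. Condensed as a dry-run sketch (the function name is ours; `RUN=echo` only prints the commands, since executing them needs root and a configured nvmet target):

```shell
# Dry-run condensation of the kernel nvmet teardown traced above.
# configfs objects must be removed child-before-parent.
teardown_kernel_target() {
    local RUN=echo   # set RUN= to execute for real (requires root)
    local nqn="nqn.2016-06.io.spdk:testnqn"
    local cfs="/sys/kernel/config/nvmet"

    $RUN sh -c "echo 0 > $cfs/subsystems/$nqn/namespaces/1/enable"
    $RUN rm -f "$cfs/ports/1/subsystems/$nqn"
    $RUN rmdir "$cfs/subsystems/$nqn/namespaces/1"
    $RUN rmdir "$cfs/ports/1"
    $RUN rmdir "$cfs/subsystems/$nqn"
    $RUN modprobe -r nvmet_tcp nvmet
}

teardown_kernel_target
```

With `RUN=echo` the function emits the six commands in teardown order instead of running them.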
00:27:01.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lcov --version 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.604 --rc genhtml_branch_coverage=1 00:27:01.604 --rc genhtml_function_coverage=1 00:27:01.604 --rc genhtml_legend=1 00:27:01.604 --rc geninfo_all_blocks=1 00:27:01.604 --rc geninfo_unexecuted_blocks=1 00:27:01.604 00:27:01.604 ' 00:27:01.604 09:03:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.604 --rc genhtml_branch_coverage=1 00:27:01.604 --rc genhtml_function_coverage=1 00:27:01.604 --rc genhtml_legend=1 00:27:01.604 --rc geninfo_all_blocks=1 00:27:01.604 --rc geninfo_unexecuted_blocks=1 00:27:01.604 00:27:01.604 ' 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.604 --rc genhtml_branch_coverage=1 00:27:01.604 --rc genhtml_function_coverage=1 00:27:01.604 --rc genhtml_legend=1 00:27:01.604 --rc geninfo_all_blocks=1 00:27:01.604 --rc geninfo_unexecuted_blocks=1 00:27:01.604 00:27:01.604 ' 00:27:01.604 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:01.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.604 --rc genhtml_branch_coverage=1 00:27:01.604 --rc genhtml_function_coverage=1 00:27:01.604 --rc genhtml_legend=1 00:27:01.604 --rc geninfo_all_blocks=1 00:27:01.604 --rc geninfo_unexecuted_blocks=1 00:27:01.604 00:27:01.604 ' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
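The cmp_versions walk traced above splits each version string on dots and dashes (`IFS=.-`) and compares components numerically, left to right, to decide `lt 1.15 2`. The same idea in a few lines; `version_lt` is our own helper name, not SPDK's function:

```shell
# Component-wise version comparison in the spirit of the cmp_versions
# trace above: succeeds (exit 0) when $1 < $2, numerically per component.
version_lt() {
    local IFS=.-        # split on dots and dashes, as in the trace
    local -a a=() b=()
    local i
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is numeric, so `1.2 < 1.15` holds (2 < 15), matching the trace's semantics rather than lexicographic order.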
00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.605 09:03:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:01.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:01.605 09:03:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:01.605 09:03:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:04.135 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:04.135 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
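The discovery loop above resolves each supported NIC's PCI address to its kernel net device by globbing the device's `net/` directory in sysfs, producing the "Found net devices under ..." lines that follow. A sketch of that mapping; the sysfs root is taken as a parameter (our addition, not in the trace) so the function can be exercised against a scratch directory instead of the real `/sys`:

```shell
# Map a PCI address to its network interface name(s) by listing
# <sysfs>/bus/pci/devices/<addr>/net/, the layout used by the trace above.
pci_net_devs() {
    local sysfs=$1 pci=$2 d found=0
    for d in "$sysfs/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] || continue   # glob matched nothing: no net device
        echo "${d##*/}"           # strip the path, keep the interface name
        found=1
    done
    (( found )) || return 1
}
```

Against the real `/sys` on the test node this is the step that maps 0000:09:00.0 to cvl_0_0.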
00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:04.135 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:04.136 Found net devices under 0000:09:00.0: cvl_0_0 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:04.136 Found net devices under 0000:09:00.1: cvl_0_1 00:27:04.136 09:03:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:04.136 09:03:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.136 09:03:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:04.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:27:04.136 00:27:04.136 --- 10.0.0.2 ping statistics --- 00:27:04.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.136 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:27:04.136 00:27:04.136 --- 10.0.0.1 ping statistics --- 00:27:04.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.136 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=919578 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:04.136 09:03:17 
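The `nvmf_tcp_init` sequence above (netns add, link move, per-side addressing, bring-up, then cross-ping) gives the target a real TCP path to the initiator on one host. A dry-run sketch of the same steps, assuming two peer interfaces already exist (here it echoes each command; pass an empty fourth argument and run as root to apply them):

```shell
# Sketch of the netns split done by nvmf_tcp_init (assumptions: $2/$3 are
# existing peer interfaces; default mode only prints the ip commands).
setup_tcp_ns() {
  local ns=$1 if_tgt=$2 if_ini=$3 run=${4-echo}
  $run ip netns add "$ns"
  $run ip link set "$if_tgt" netns "$ns"              # target side into the ns
  $run ip addr add 10.0.0.1/24 dev "$if_ini"          # initiator IP
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$if_tgt"  # target IP
  $run ip link set "$if_ini" up
  $run ip netns exec "$ns" ip link set "$if_tgt" up
  $run ip netns exec "$ns" ip link set lo up
}
setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two pings in the log (one from the root namespace, one via `ip netns exec`) then confirm both directions of the path before the target app starts.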
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 919578 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 919578 ']' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0ab26c4ae076d1ea8ba297f8dde1f0a8 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.r2o 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0ab26c4ae076d1ea8ba297f8dde1f0a8 0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0ab26c4ae076d1ea8ba297f8dde1f0a8 0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0ab26c4ae076d1ea8ba297f8dde1f0a8 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:04.136 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.r2o 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.r2o 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.r2o 00:27:04.394 09:03:17 
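The `gen_dhchap_key` / `format_dhchap_key` traces above read random hex from `/dev/urandom` with `xxd` and then run an inline `python -` step to emit the final key file. A hedged sketch of what that pipeline likely produces, assuming the DHHC-1 on-disk form is `DHHC-1:<digest id as two hex digits>:<base64 of key bytes plus little-endian CRC32>:` as in nvme-cli's in-band authentication keys:

```shell
# Sketch of gen_dhchap_key (assumption: DHHC-1 = prefix + digest id +
# base64(key || CRC32-LE) + trailing colon; digest ids 0=null..3=sha512).
gen_dhchap_key_sketch() {
  local digest=$1 nbytes=$2 hex
  hex=$(xxd -p -c0 -l "$nbytes" /dev/urandom)   # nbytes random bytes as hex
  python3 -c 'import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")' \
    "$hex" "$digest"
}
gen_dhchap_key_sketch 0 16   # a "null"-digest key from 16 random bytes, like keys[0]
```

The log's `len=32` with `xxd ... -l 16` matches this: half as many random bytes as hex digits, with the digest index taken from the `digests` associative array.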
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=ffc23e98094c05e44923414f9fa6bc40e6cd4782b89e6e9cb8ffbadef6a22215 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:04.394 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Gye 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key ffc23e98094c05e44923414f9fa6bc40e6cd4782b89e6e9cb8ffbadef6a22215 3 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 ffc23e98094c05e44923414f9fa6bc40e6cd4782b89e6e9cb8ffbadef6a22215 3 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=ffc23e98094c05e44923414f9fa6bc40e6cd4782b89e6e9cb8ffbadef6a22215 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Gye 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Gye 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Gye 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=86f7edb4c5c18b61414b5db5fc89ea5ad8e2f0a38bd3b52a 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.aSa 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 86f7edb4c5c18b61414b5db5fc89ea5ad8e2f0a38bd3b52a 0 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 86f7edb4c5c18b61414b5db5fc89ea5ad8e2f0a38bd3b52a 0 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=86f7edb4c5c18b61414b5db5fc89ea5ad8e2f0a38bd3b52a 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.aSa 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.aSa 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.aSa 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b085e123d686f3a41bc8ec97b396c3a446f3d82e91fb1dab 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.jTq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b085e123d686f3a41bc8ec97b396c3a446f3d82e91fb1dab 2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
format_key DHHC-1 b085e123d686f3a41bc8ec97b396c3a446f3d82e91fb1dab 2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b085e123d686f3a41bc8ec97b396c3a446f3d82e91fb1dab 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.jTq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.jTq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.jTq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=c09c820535e76e04234e1bd083aa2546 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.3xq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key c09c820535e76e04234e1bd083aa2546 1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 c09c820535e76e04234e1bd083aa2546 1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=c09c820535e76e04234e1bd083aa2546 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.3xq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.3xq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.3xq 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@753 -- # key=341af4a07c82d0a533ff693a6cc1e477 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.zao 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 341af4a07c82d0a533ff693a6cc1e477 1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 341af4a07c82d0a533ff693a6cc1e477 1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=341af4a07c82d0a533ff693a6cc1e477 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.zao 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.zao 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.zao 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:27:04.395 09:03:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b06720f1fba6ea75a2d533ba44dee1e703bf5a13dddc3716 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ggn 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b06720f1fba6ea75a2d533ba44dee1e703bf5a13dddc3716 2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b06720f1fba6ea75a2d533ba44dee1e703bf5a13dddc3716 2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b06720f1fba6ea75a2d533ba44dee1e703bf5a13dddc3716 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:27:04.395 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ggn 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ggn 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ggn 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=4748b6369d5c4e4c6239d70a617c5676 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Eki 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 4748b6369d5c4e4c6239d70a617c5676 0 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 4748b6369d5c4e4c6239d70a617c5676 0 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=4748b6369d5c4e4c6239d70a617c5676 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Eki 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Eki 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Eki 00:27:04.653 09:03:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=e74d8b3779503576892189860a521e889d6ccfe765a8b9cd7e3b8c5e9dc01af3 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.euV 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key e74d8b3779503576892189860a521e889d6ccfe765a8b9cd7e3b8c5e9dc01af3 3 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 e74d8b3779503576892189860a521e889d6ccfe765a8b9cd7e3b8c5e9dc01af3 3 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=e74d8b3779503576892189860a521e889d6ccfe765a8b9cd7e3b8c5e9dc01af3 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 
00:27:04.653 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.euV 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.euV 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.euV 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 919578 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 919578 ']' 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
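`waitforlisten` above blocks until the freshly started `nvmf_tgt` owns the RPC socket, using the `max_retries=100` counter visible in the trace. A minimal sketch of that idea, assuming the helper simply polls for the UNIX socket and gives up after the retry budget:

```shell
# Sketch of waitforlisten (assumption: poll for the app's UNIX-domain RPC
# socket, e.g. /var/tmp/spdk.sock, and fail once max_retries is exhausted).
waitforlisten_sketch() {
  local sock=$1 max_retries=${2:-100}
  while [ "$max_retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0      # socket exists: the target is listening
    max_retries=$((max_retries - 1))
    sleep 0.1
  done
  return 1
}
```

Only after this returns does the script issue `rpc_cmd` calls; racing ahead would make every RPC fail with a connection error.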
00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:04.654 09:03:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.911 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:04.911 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:04.911 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.911 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.r2o 00:27:04.911 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Gye ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Gye 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.aSa 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.jTq ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.jTq 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.3xq 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.zao ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zao 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.ggn 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Eki ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Eki 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.euV 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:04.912 09:03:18 
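The loop above registers each generated key file with the target as `keyN`, and its controller counterpart as `ckeyN` when one exists. A dry-run sketch of that loop, assuming `rpc_cmd` wraps SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock` (here an `echo` stub stands in so the sketch prints the calls instead of issuing them):

```shell
# Sketch of the key-registration loop (assumption: rpc_cmd == rpc.py on the
# app's UNIX socket; RPC_CMD left unset keeps this a dry run).
register_keys() {
  local rpc=${RPC_CMD:-echo rpc.py} i
  for i in "${!keys[@]}"; do
    $rpc keyring_file_add_key "key$i" "${keys[$i]}"
    if [ -n "${ckeys[$i]:-}" ]; then
      $rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"   # controller key
    fi
  done
}
keys=(/tmp/spdk.key-null.r2o)
ckeys=(/tmp/spdk.key-sha512.Gye)
register_keys
```

The `[[ -n ... ]]` guard in the log serves the same role as the `if` here: `ckeys[4]` is empty, so key 4 gets no controller key.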
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:04.912 09:03:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:06.286 Waiting for block devices as requested 00:27:06.286 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:06.286 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:06.286 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:06.544 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:06.544 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:06.544 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:06.544 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:06.802 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:06.802 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:07.059 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:07.059 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:07.059 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:07.059 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:07.317 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:07.317 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:07.317 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:07.574 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:07.833 09:03:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:07.833 No valid GPT data, bailing 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 
-- # echo 10.0.0.1 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:07.833 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:08.091 00:27:08.091 Discovery Log Number of Records 2, Generation counter 2 00:27:08.091 =====Discovery Log Entry 0====== 00:27:08.091 trtype: tcp 00:27:08.091 adrfam: ipv4 00:27:08.091 subtype: current discovery subsystem 00:27:08.091 treq: not specified, sq flow control disable supported 00:27:08.091 portid: 1 00:27:08.091 trsvcid: 4420 00:27:08.091 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:08.091 traddr: 10.0.0.1 00:27:08.091 eflags: none 00:27:08.091 sectype: none 00:27:08.091 =====Discovery Log Entry 1====== 00:27:08.091 trtype: tcp 00:27:08.091 adrfam: ipv4 00:27:08.091 subtype: nvme subsystem 00:27:08.091 treq: not specified, sq flow control disable supported 00:27:08.091 portid: 1 00:27:08.091 trsvcid: 4420 00:27:08.091 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:08.091 traddr: 10.0.0.1 00:27:08.091 eflags: none 00:27:08.091 sectype: none 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
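The `configure_kernel_target` body above (common.sh:684-703) builds the kernel nvmet subsystem, namespace, and port through configfs, then the `nvme discover` output confirms both the discovery subsystem and `nqn.2024-02.io.spdk:cnode0` are reachable. The xtrace elides the redirect targets of the bare `echo`s, so the `attr_*`/`addr_*` file names below are an assumption of this sketch (the kernel nvmet configfs attribute names as I understand them); `NVMET` defaults to a scratch directory so the sketch runs unprivileged, without the real `/sys/kernel/config/nvmet`.

```shell
# Sketch of configure_kernel_target (common.sh:684-703 above). The
# echo targets are elided in the xtrace; the attr_*/addr_* names are
# assumed kernel nvmet configfs attributes. NVMET defaults to a
# scratch dir so this runs without root or a configfs mount.
NVMET=${NVMET:-$(mktemp -d)}
SUBSYS=$NVMET/subsystems/nqn.2024-02.io.spdk:cnode0
PORT=$NVMET/ports/1

mkdir -p "$SUBSYS/namespaces/1" "$PORT/subsystems"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$SUBSYS/attr_model"
echo 1             > "$SUBSYS/attr_allow_any_host"
echo /dev/nvme0n1  > "$SUBSYS/namespaces/1/device_path"
echo 1             > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1      > "$PORT/addr_traddr"
echo tcp           > "$PORT/addr_trtype"
echo 4420          > "$PORT/addr_trsvcid"
echo ipv4          > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"   # expose the subsystem on the port
```

Against the real configfs the same writes require root, `modprobe nvmet nvmet-tcp`, and an existing block device at `device_path`.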
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host 
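The `nvmet_auth_set_key` trace above (auth.sh:42-51) installs the digest, DH group, and DHHC-1 key pair for the allowed host. As with the target setup, the xtrace elides the redirect targets, so the `dhchap_*` attribute names below are this sketch's assumption about the kernel nvmet host configfs attributes; the key strings are the ones from the log, and `NVMET` is a scratch directory so the sketch runs unprivileged.

```shell
# Sketch of nvmet_auth_set_key (auth.sh:42-51 above): set the
# DH-HMAC-CHAP digest, dhgroup, and key pair for one allowed host.
# The dhchap_* attribute names are assumed (elided in the xtrace);
# the DHHC-1 strings are copied from the log.
NVMET=${NVMET:-$(mktemp -d)}
HOST=$NVMET/hosts/nqn.2024-02.io.spdk:host0
mkdir -p "$HOST"

echo 'hmac(sha256)' > "$HOST/dhchap_hash"
echo ffdhe2048      > "$HOST/dhchap_dhgroup"
echo "DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==:" > "$HOST/dhchap_key"
echo "DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==:" > "$HOST/dhchap_ctrlr_key"
```

Setting `dhchap_ctrlr_key` as well is what makes the later attach with `--dhchap-ctrlr-key` perform bidirectional authentication.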
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 nvme0n1 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.091 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 nvme0n1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 09:03:21 
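Each `connect_authenticate` pass in the loop above boils down to the same two RPCs: restrict the allowed DH-HMAC-CHAP digests and DH groups, then attach with a specific key pair. Written out as one would issue them by hand (a sketch: `RPC` is prefixed with `echo` so it runs without a live SPDK target; drop the `echo` and point at an SPDK tree to issue them for real):

```shell
# The RPC pair each connect_authenticate pass above issues. echo
# stands in for scripts/rpc.py so the sketch runs without a target.
RPC="echo scripts/rpc.py"

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    $RPC bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
}

connect_authenticate sha256 ffdhe2048 1
```

The test then verifies the attach succeeded with `bdev_nvme_get_controllers` (expecting `nvme0`) and tears it down with `bdev_nvme_detach_controller` before the next digest/dhgroup/key combination, which is the `nvme0n1` / detach pattern repeating through the rest of the log.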
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.350 
09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.350 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.609 nvme0n1 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.609 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.610 09:03:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:08.867 nvme0n1 00:27:08.867 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.867 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.868 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.126 nvme0n1 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.126 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.127 09:03:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.127 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.384 nvme0n1 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.384 
09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.384 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:09.385 
09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.385 09:03:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.385 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.643 nvme0n1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.643 09:03:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.643 09:03:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.643 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.901 nvme0n1 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.901 09:03:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.901 09:03:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:09.901 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.902 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.159 nvme0n1 00:27:10.159 09:03:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:10.159 09:03:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.159 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.160 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.419 nvme0n1 00:27:10.419 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.420 09:03:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.420 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.680 nvme0n1 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.680 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.681 09:03:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.939 nvme0n1 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.939 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.940 
09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.940 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.198 nvme0n1 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.198 09:03:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.198 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.764 nvme0n1 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.764 09:03:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:11.764 
09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.764 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:11.765 09:03:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.765 09:03:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.023 nvme0n1 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.023 09:03:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.023 
09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.023 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.281 nvme0n1 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.281 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.282 09:03:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.282 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.847 nvme0n1 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.848 09:03:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.848 09:03:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.848 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.414 nvme0n1 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.414 09:03:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.414 09:03:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.979 nvme0n1 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.979 09:03:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:13.979 09:03:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:13.979 09:03:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.979 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 nvme0n1 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.545 09:03:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:14.545 09:03:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.545 09:03:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.112 nvme0n1 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:15.112 09:03:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.112 09:03:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.116 nvme0n1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.117 09:03:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.117 09:03:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.117 09:03:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.117 09:03:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.049 nvme0n1 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.049 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.050 09:03:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.050 09:03:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.983 nvme0n1 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.983 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.916 nvme0n1 00:27:18.916 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.916 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.917 
09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.917 09:03:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 nvme0n1 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 nvme0n1 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.851 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:19.852 
09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.852 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.110 nvme0n1 
00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.110 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:20.111 09:03:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.111 
09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.111 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.369 nvme0n1
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.369 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.370 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.628 nvme0n1
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.628 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.887 nvme0n1
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.887 09:03:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:20.887 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]]
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:20.888 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.147 nvme0n1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==:
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==:
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==:
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==:
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.147 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 nvme0n1
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda:
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x:
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda:
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x:
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.405 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.663 nvme0n1
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.663 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.921 nvme0n1
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.921 09:03:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.921 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:21.922 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.180 nvme0n1
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:22.180 09:03:35
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.180 09:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.180 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.181 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.181 09:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 nvme0n1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.438 
09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.438 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 nvme0n1 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.696 09:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:22.696 09:03:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # local -A ip_candidates 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.696 09:03:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.954 nvme0n1 00:27:22.954 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.954 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.954 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.954 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.954 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.213 09:03:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.213 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 nvme0n1 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 09:03:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:23.472 09:03:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:23.472 
09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.472 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.730 nvme0n1 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:23.730 09:03:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.730 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.731 09:03:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 nvme0n1 
00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:24.297 09:03:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.297 
09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.297 09:03:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.863 nvme0n1 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.863 09:03:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:24.863 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:24.864 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:24.864 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.864 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.864 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.430 nvme0n1 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 
-- # ip=NVMF_INITIATOR_IP 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.430 09:03:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.996 nvme0n1 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.996 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.562 nvme0n1 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.562 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.563 09:03:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.563 09:03:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.563 09:03:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.496 nvme0n1 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:27.496 09:03:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.496 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 
00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.497 09:03:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.430 nvme0n1 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.430 
09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.430 09:03:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.430 09:03:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.364 nvme0n1 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.364 09:03:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:29.364 09:03:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.364 09:03:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.298 nvme0n1 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:30.298 09:03:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.298 09:03:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 nvme0n1 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.231 
09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.231 nvme0n1 00:27:31.231 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.232 09:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.232 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 
-- # ip_candidates=() 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.490 nvme0n1 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.490 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:31.491 09:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 
00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.491 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.748 nvme0n1 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.748 09:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.748 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:31.749 09:03:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.749 09:03:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.006 nvme0n1 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:27:32.006 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.007 09:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.007 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 nvme0n1 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.265 09:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.265 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.523 nvme0n1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.523 09:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A 
ip_candidates 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.523 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.781 nvme0n1 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:32.782 
09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.782 09:03:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.782 09:03:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.041 nvme0n1 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.041 09:03:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]]
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.041 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.299 nvme0n1
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.299 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.300 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.558 nvme0n1
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:33.558 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]]
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.559 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.817 nvme0n1
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.817 09:03:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==:
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==:
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==:
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==:
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:33.817 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.075 nvme0n1
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda:
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x:
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda:
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]]
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x:
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.075 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.333 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.591 nvme0n1
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==:
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2:
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.591 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.850 nvme0n1
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:34.850 09:03:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=:
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:34.850 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.108 nvme0n1
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8:
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=:
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768
-- # local -A ip_candidates 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:35.108 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.109 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.674 nvme0n1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:35.674 09:03:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
[[ -z tcp ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.674 09:03:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.241 nvme0n1 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.241 
09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.241 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.808 nvme0n1 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.808 09:03:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.808 09:03:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
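The `get_main_ns_ip` trace repeated throughout this section (`nvmf/common.sh@767`-`@781`) resolves the address to attach to from the transport type via an associative-array lookup plus indirect expansion. A minimal runnable sketch of that logic, assuming bash and with illustrative values for `TEST_TRANSPORT` and `NVMF_INITIATOR_IP` (the trace shows `tcp` and `10.0.0.1`):

```shell
#!/usr/bin/env bash
# Illustrative environment, mirroring the values visible in the trace above.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2   # hypothetical; used only for the rdma branch

# Hedged reconstruction of get_main_ns_ip as traced in this log; not the
# verbatim SPDK source.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)

	[[ -z $TEST_TRANSPORT ]] && return 1
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

	# The candidate is a *variable name*; dereference it to get the address
	# (the trace shows ip=NVMF_INITIATOR_IP, then the value 10.0.0.1).
	ip=${ip_candidates[$TEST_TRANSPORT]}
	ip=${!ip}

	[[ -z $ip ]] && return 1
	echo "$ip"
}

get_main_ns_ip
```

The indirection is why the trace prints `ip=NVMF_INITIATOR_IP` first and `10.0.0.1` only at the final `echo`.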
00:27:37.373 nvme0n1 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:37.373 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.374 
09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.374 09:03:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.941 nvme0n1 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGFiMjZjNGFlMDc2ZDFlYThiYTI5N2Y4ZGRlMWYwYTgnrbJ8: 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZjMjNlOTgwOTRjMDVlNDQ5MjM0MTRmOWZhNmJjNDBlNmNkNDc4MmI4OWU2ZTljYjhmZmJhZGVmNmEyMjIxNXuWZuE=: 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:37.941 09:03:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.941 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.005 nvme0n1 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.005 09:03:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.005 09:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.005 09:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.005 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 nvme0n1 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 09:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:39.942 09:03:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.942 09:03:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.509 nvme0n1 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.509 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.768 09:03:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjA2NzIwZjFmYmE2ZWE3NWEyZDUzM2JhNDRkZWUxZTcwM2JmNWExM2RkZGMzNzE2SuOEQw==: 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NDc0OGI2MzY5ZDVjNGU0YzYyMzlkNzBhNjE3YzU2NzafiQv2: 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.768 09:03:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:41.704 nvme0n1 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc0ZDhiMzc3OTUwMzU3Njg5MjE4OTg2MGE1MjFlODg5ZDZjY2ZlNzY1YThiOWNkN2UzYjhjNWU5ZGMwMWFmMzCmZHM=: 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:41.704 
09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.704 09:03:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.642 nvme0n1 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:42.642 
09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.642 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.642 request: 00:27:42.643 { 00:27:42.643 "name": "nvme0", 00:27:42.643 "trtype": "tcp", 00:27:42.643 "traddr": "10.0.0.1", 00:27:42.643 "adrfam": "ipv4", 00:27:42.643 "trsvcid": "4420", 00:27:42.643 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:42.643 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:42.643 "prchk_reftag": false, 00:27:42.643 "prchk_guard": false, 00:27:42.643 "hdgst": false, 00:27:42.643 "ddgst": false, 00:27:42.643 "allow_unrecognized_csi": false, 00:27:42.643 "method": "bdev_nvme_attach_controller", 00:27:42.643 "req_id": 1 00:27:42.643 } 00:27:42.643 Got JSON-RPC error response 00:27:42.643 response: 00:27:42.643 { 00:27:42.643 "code": -5, 00:27:42.643 "message": "Input/output 
error" 00:27:42.643 } 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.643 request: 00:27:42.643 { 00:27:42.643 "name": "nvme0", 00:27:42.643 "trtype": "tcp", 00:27:42.643 "traddr": "10.0.0.1", 
00:27:42.643 "adrfam": "ipv4", 00:27:42.643 "trsvcid": "4420", 00:27:42.643 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:42.643 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:42.643 "prchk_reftag": false, 00:27:42.643 "prchk_guard": false, 00:27:42.643 "hdgst": false, 00:27:42.643 "ddgst": false, 00:27:42.643 "dhchap_key": "key2", 00:27:42.643 "allow_unrecognized_csi": false, 00:27:42.643 "method": "bdev_nvme_attach_controller", 00:27:42.643 "req_id": 1 00:27:42.643 } 00:27:42.643 Got JSON-RPC error response 00:27:42.643 response: 00:27:42.643 { 00:27:42.643 "code": -5, 00:27:42.643 "message": "Input/output error" 00:27:42.643 } 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.643 09:03:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:42.643 09:03:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.643 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.903 request: 00:27:42.903 { 00:27:42.903 "name": "nvme0", 00:27:42.903 "trtype": "tcp", 00:27:42.903 "traddr": "10.0.0.1", 00:27:42.903 "adrfam": "ipv4", 00:27:42.903 "trsvcid": "4420", 00:27:42.903 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:42.903 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:42.903 "prchk_reftag": false, 00:27:42.903 "prchk_guard": false, 00:27:42.903 "hdgst": false, 00:27:42.903 "ddgst": false, 00:27:42.903 "dhchap_key": "key1", 00:27:42.903 "dhchap_ctrlr_key": "ckey2", 00:27:42.903 "allow_unrecognized_csi": false, 00:27:42.903 "method": "bdev_nvme_attach_controller", 00:27:42.903 "req_id": 1 00:27:42.903 } 00:27:42.903 Got JSON-RPC error response 00:27:42.903 response: 00:27:42.903 { 00:27:42.903 "code": -5, 00:27:42.903 "message": "Input/output error" 00:27:42.903 } 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.903 09:03:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.903 nvme0n1 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.903 09:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:42.903 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:42.904 09:03:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.904 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.162 request: 00:27:43.162 { 00:27:43.162 "name": "nvme0", 00:27:43.162 "dhchap_key": "key1", 00:27:43.162 "dhchap_ctrlr_key": "ckey2", 00:27:43.162 "method": "bdev_nvme_set_keys", 00:27:43.162 "req_id": 1 00:27:43.162 } 00:27:43.162 Got JSON-RPC error response 00:27:43.162 response: 00:27:43.162 { 00:27:43.162 "code": -13, 00:27:43.162 "message": "Permission denied" 00:27:43.162 } 00:27:43.162 
09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:43.162 09:03:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:44.099 09:03:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZmN2VkYjRjNWMxOGI2MTQxNGI1ZGI1ZmM4OWVhNWFkOGUyZjBhMzhiZDNiNTJhhSB5og==: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: ]] 00:27:45.476 09:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjA4NWUxMjNkNjg2ZjNhNDFiYzhlYzk3YjM5NmMzYTQ0NmYzZDgyZTkxZmIxZGFiWI86Lg==: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.476 nvme0n1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.476 09:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzA5YzgyMDUzNWU3NmUwNDIzNGUxYmQwODNhYTI1NDZyaAda: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzQxYWY0YTA3YzgyZDBhNTMzZmY2OTNhNmNjMWU0NzeFx01x: 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:45.476 
09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.476 request: 00:27:45.476 { 00:27:45.476 "name": "nvme0", 00:27:45.476 "dhchap_key": "key2", 00:27:45.476 "dhchap_ctrlr_key": "ckey1", 00:27:45.476 "method": "bdev_nvme_set_keys", 00:27:45.476 "req_id": 1 00:27:45.476 } 00:27:45.476 Got JSON-RPC error response 00:27:45.476 response: 00:27:45.476 { 00:27:45.476 "code": -13, 00:27:45.476 "message": "Permission denied" 00:27:45.476 } 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.476 09:03:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:45.476 09:03:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:46.411 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.411 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:46.411 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.411 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.411 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:46.670 rmmod nvme_tcp 00:27:46.670 rmmod nvme_fabrics 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 919578 ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 919578 ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 919578' 00:27:46.670 killing process with pid 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 919578 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:46.670 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:27:46.931 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.931 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.931 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.931 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.931 09:03:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:27:48.835 09:04:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:50.213 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:50.213 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:50.213 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:51.150 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:51.408 09:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.r2o /tmp/spdk.key-null.aSa /tmp/spdk.key-sha256.3xq /tmp/spdk.key-sha384.ggn /tmp/spdk.key-sha512.euV 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:51.408 09:04:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.782 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:52.782 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:52.782 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:52.782 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:52.782 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:52.782 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:52.782 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:52.782 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:52.782 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:52.782 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:52.782 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:52.782 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:52.782 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:52.782 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:52.782 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:52.782 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:52.782 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:52.782 00:27:52.782 real 0m51.326s 00:27:52.782 user 0m48.487s 00:27:52.782 sys 0m6.304s 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.782 ************************************ 00:27:52.782 END TEST nvmf_auth_host 00:27:52.782 ************************************ 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:52.782 09:04:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.782 ************************************ 00:27:52.782 START TEST nvmf_digest 00:27:52.782 ************************************ 00:27:52.782 09:04:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:52.782 * Looking for test storage... 00:27:52.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:52.782 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:27:52.782 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lcov --version 00:27:52.782 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:27:53.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.040 --rc genhtml_branch_coverage=1 00:27:53.040 --rc genhtml_function_coverage=1 00:27:53.040 --rc genhtml_legend=1 00:27:53.040 --rc geninfo_all_blocks=1 00:27:53.040 --rc geninfo_unexecuted_blocks=1 00:27:53.040 00:27:53.040 ' 00:27:53.040 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:27:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.041 --rc genhtml_branch_coverage=1 00:27:53.041 --rc genhtml_function_coverage=1 00:27:53.041 --rc genhtml_legend=1 00:27:53.041 --rc geninfo_all_blocks=1 00:27:53.041 --rc geninfo_unexecuted_blocks=1 00:27:53.041 00:27:53.041 ' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:27:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.041 --rc genhtml_branch_coverage=1 00:27:53.041 --rc genhtml_function_coverage=1 00:27:53.041 --rc genhtml_legend=1 00:27:53.041 --rc geninfo_all_blocks=1 00:27:53.041 --rc geninfo_unexecuted_blocks=1 00:27:53.041 00:27:53.041 ' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:27:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.041 --rc genhtml_branch_coverage=1 00:27:53.041 --rc genhtml_function_coverage=1 00:27:53.041 --rc genhtml_legend=1 00:27:53.041 --rc geninfo_all_blocks=1 00:27:53.041 --rc geninfo_unexecuted_blocks=1 00:27:53.041 00:27:53.041 ' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:53.041 09:04:06 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.041 09:04:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.572 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.572 09:04:08 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:55.573 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:55.573 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:55.573 Found net devices under 0000:09:00.0: cvl_0_0 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:55.573 Found net devices under 0000:09:00.1: cvl_0_1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@440 -- # is_hw=yes 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:55.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:27:55.573 00:27:55.573 --- 10.0.0.2 ping statistics --- 00:27:55.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.573 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:27:55.573 00:27:55.573 --- 10.0.0.1 ping statistics --- 00:27:55.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.573 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:55.573 ************************************ 00:27:55.573 START TEST nvmf_digest_clean 00:27:55.573 ************************************ 00:27:55.573 
09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:55.573 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=929185 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 929185 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 929185 ']' 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.574 09:04:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:55.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.574 [2024-11-06 09:04:08.533063] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:27:55.574 [2024-11-06 09:04:08.533154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.574 [2024-11-06 09:04:08.604174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.574 [2024-11-06 09:04:08.660281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.574 [2024-11-06 09:04:08.660327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.574 [2024-11-06 09:04:08.660355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.574 [2024-11-06 09:04:08.660367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.574 [2024-11-06 09:04:08.660377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:55.574 [2024-11-06 09:04:08.661013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.574 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.832 null0 00:27:55.832 [2024-11-06 09:04:08.900285] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.832 [2024-11-06 09:04:08.924476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=929238 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 929238 /var/tmp/bperf.sock 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:55.832 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 929238 ']' 00:27:55.833 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:55.833 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:55.833 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:55.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:55.833 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:55.833 09:04:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:55.833 [2024-11-06 09:04:08.981671] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:27:55.833 [2024-11-06 09:04:08.981753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929238 ] 00:27:55.833 [2024-11-06 09:04:09.052445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.833 [2024-11-06 09:04:09.111317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.090 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.090 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:56.090 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:56.090 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:56.090 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:56.349 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.349 09:04:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:56.913 nvme0n1 00:27:56.913 09:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:56.913 09:04:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:56.913 Running I/O for 2 seconds... 00:27:59.221 18882.00 IOPS, 73.76 MiB/s [2024-11-06T08:04:12.510Z] 18797.00 IOPS, 73.43 MiB/s 00:27:59.221 Latency(us) 00:27:59.221 [2024-11-06T08:04:12.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.221 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:59.221 nvme0n1 : 2.05 18437.10 72.02 0.00 0.00 6793.98 3640.89 46797.56 00:27:59.221 [2024-11-06T08:04:12.510Z] =================================================================================================================== 00:27:59.221 [2024-11-06T08:04:12.510Z] Total : 18437.10 72.02 0.00 0.00 6793.98 3640.89 46797.56 00:27:59.221 { 00:27:59.221 "results": [ 00:27:59.221 { 00:27:59.221 "job": "nvme0n1", 00:27:59.221 "core_mask": "0x2", 00:27:59.221 "workload": "randread", 00:27:59.221 "status": "finished", 00:27:59.221 "queue_depth": 128, 00:27:59.221 "io_size": 4096, 00:27:59.221 "runtime": 2.045983, 00:27:59.221 "iops": 18437.10333859079, 00:27:59.221 "mibps": 72.01993491637027, 00:27:59.221 "io_failed": 0, 00:27:59.221 "io_timeout": 0, 00:27:59.221 "avg_latency_us": 6793.9753670812, 00:27:59.221 "min_latency_us": 3640.8888888888887, 00:27:59.221 "max_latency_us": 46797.55851851852 00:27:59.221 } 00:27:59.221 ], 00:27:59.221 "core_count": 1 00:27:59.221 } 00:27:59.221 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:59.221 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:59.221 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:59.221 | select(.opcode=="crc32c") 00:27:59.221 | "\(.module_name) \(.executed)"' 00:27:59.221 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:59.221 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 929238 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 929238 ']' 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 929238 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929238 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929238' 00:27:59.479 killing process with pid 929238 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 929238 00:27:59.479 Received shutdown signal, test time was about 2.000000 seconds 00:27:59.479 00:27:59.479 Latency(us) 00:27:59.479 [2024-11-06T08:04:12.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.479 [2024-11-06T08:04:12.768Z] =================================================================================================================== 00:27:59.479 [2024-11-06T08:04:12.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.479 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 929238 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=929740 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 929740 /var/tmp/bperf.sock 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 929740 ']' 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.737 09:04:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:59.737 [2024-11-06 09:04:12.848054] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:27:59.737 [2024-11-06 09:04:12.848146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid929740 ] 00:27:59.737 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:59.737 Zero copy mechanism will not be used. 
00:27:59.737 [2024-11-06 09:04:12.913105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.737 [2024-11-06 09:04:12.971868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.995 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.995 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:59.995 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:59.995 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:59.995 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:00.253 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.253 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:00.819 nvme0n1 00:28:00.819 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:00.819 09:04:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:00.819 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:00.819 Zero copy mechanism will not be used. 00:28:00.819 Running I/O for 2 seconds... 
00:28:03.125 5966.00 IOPS, 745.75 MiB/s [2024-11-06T08:04:16.414Z] 5812.50 IOPS, 726.56 MiB/s 00:28:03.125 Latency(us) 00:28:03.125 [2024-11-06T08:04:16.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:03.125 nvme0n1 : 2.00 5812.69 726.59 0.00 0.00 2748.25 713.01 5145.79 00:28:03.125 [2024-11-06T08:04:16.414Z] =================================================================================================================== 00:28:03.125 [2024-11-06T08:04:16.414Z] Total : 5812.69 726.59 0.00 0.00 2748.25 713.01 5145.79 00:28:03.125 { 00:28:03.125 "results": [ 00:28:03.125 { 00:28:03.125 "job": "nvme0n1", 00:28:03.125 "core_mask": "0x2", 00:28:03.125 "workload": "randread", 00:28:03.125 "status": "finished", 00:28:03.125 "queue_depth": 16, 00:28:03.125 "io_size": 131072, 00:28:03.125 "runtime": 2.002688, 00:28:03.125 "iops": 5812.687747667135, 00:28:03.125 "mibps": 726.5859684583919, 00:28:03.125 "io_failed": 0, 00:28:03.125 "io_timeout": 0, 00:28:03.125 "avg_latency_us": 2748.248188936295, 00:28:03.125 "min_latency_us": 713.0074074074074, 00:28:03.125 "max_latency_us": 5145.789629629629 00:28:03.125 } 00:28:03.125 ], 00:28:03.125 "core_count": 1 00:28:03.125 } 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:03.125 
| select(.opcode=="crc32c") 00:28:03.125 | "\(.module_name) \(.executed)"' 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 929740 00:28:03.125 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 929740 ']' 00:28:03.126 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 929740 00:28:03.126 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:03.126 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:03.126 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929740 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929740' 00:28:03.384 killing process with pid 929740 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 929740 00:28:03.384 Received shutdown signal, test time was about 2.000000 seconds 00:28:03.384 00:28:03.384 Latency(us) 
00:28:03.384 [2024-11-06T08:04:16.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.384 [2024-11-06T08:04:16.673Z] =================================================================================================================== 00:28:03.384 [2024-11-06T08:04:16.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 929740 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=930152 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 930152 /var/tmp/bperf.sock 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 930152 ']' 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:03.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:03.384 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:03.642 [2024-11-06 09:04:16.695645] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:03.642 [2024-11-06 09:04:16.695727] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930152 ] 00:28:03.642 [2024-11-06 09:04:16.763184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.642 [2024-11-06 09:04:16.822809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.900 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:03.900 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:03.900 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:03.900 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:03.900 09:04:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:04.157 09:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.157 09:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:04.723 nvme0n1 00:28:04.723 09:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:04.723 09:04:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:04.723 Running I/O for 2 seconds... 
00:28:07.027 20748.00 IOPS, 81.05 MiB/s [2024-11-06T08:04:20.316Z] 19110.00 IOPS, 74.65 MiB/s 00:28:07.027 Latency(us) 00:28:07.027 [2024-11-06T08:04:20.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.027 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.027 nvme0n1 : 2.01 19108.49 74.64 0.00 0.00 6683.46 2694.26 10340.12 00:28:07.027 [2024-11-06T08:04:20.316Z] =================================================================================================================== 00:28:07.027 [2024-11-06T08:04:20.316Z] Total : 19108.49 74.64 0.00 0.00 6683.46 2694.26 10340.12 00:28:07.027 { 00:28:07.027 "results": [ 00:28:07.027 { 00:28:07.027 "job": "nvme0n1", 00:28:07.027 "core_mask": "0x2", 00:28:07.027 "workload": "randwrite", 00:28:07.027 "status": "finished", 00:28:07.027 "queue_depth": 128, 00:28:07.027 "io_size": 4096, 00:28:07.027 "runtime": 2.006857, 00:28:07.027 "iops": 19108.486553850125, 00:28:07.027 "mibps": 74.64252560097705, 00:28:07.027 "io_failed": 0, 00:28:07.027 "io_timeout": 0, 00:28:07.027 "avg_latency_us": 6683.464984759453, 00:28:07.027 "min_latency_us": 2694.257777777778, 00:28:07.027 "max_latency_us": 10340.124444444444 00:28:07.027 } 00:28:07.027 ], 00:28:07.027 "core_count": 1 00:28:07.027 } 00:28:07.027 09:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:07.027 09:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:07.027 09:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:07.027 09:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:07.027 | select(.opcode=="crc32c") 00:28:07.027 | "\(.module_name) \(.executed)"' 00:28:07.027 09:04:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 930152 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 930152 ']' 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 930152 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930152 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930152' 00:28:07.027 killing process with pid 930152 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 930152 00:28:07.027 Received shutdown signal, test time was about 2.000000 seconds 00:28:07.027 
00:28:07.027 Latency(us) 00:28:07.027 [2024-11-06T08:04:20.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.027 [2024-11-06T08:04:20.316Z] =================================================================================================================== 00:28:07.027 [2024-11-06T08:04:20.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.027 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 930152 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=930676 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 930676 /var/tmp/bperf.sock 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 930676 ']' 00:28:07.285 09:04:20 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:07.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:07.285 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:07.285 [2024-11-06 09:04:20.518427] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:07.285 [2024-11-06 09:04:20.518530] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid930676 ] 00:28:07.285 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.285 Zero copy mechanism will not be used. 
00:28:07.543 [2024-11-06 09:04:20.586876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.543 [2024-11-06 09:04:20.647550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.543 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.543 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:07.543 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:07.543 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:07.544 09:04:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:08.109 09:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.109 09:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:08.367 nvme0n1 00:28:08.367 09:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:08.367 09:04:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:08.367 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:08.367 Zero copy mechanism will not be used. 00:28:08.367 Running I/O for 2 seconds... 
00:28:10.673 5766.00 IOPS, 720.75 MiB/s [2024-11-06T08:04:23.962Z] 5908.50 IOPS, 738.56 MiB/s 00:28:10.673 Latency(us) 00:28:10.673 [2024-11-06T08:04:23.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.673 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:10.673 nvme0n1 : 2.00 5907.86 738.48 0.00 0.00 2700.58 1723.35 9369.22 00:28:10.673 [2024-11-06T08:04:23.962Z] =================================================================================================================== 00:28:10.673 [2024-11-06T08:04:23.962Z] Total : 5907.86 738.48 0.00 0.00 2700.58 1723.35 9369.22 00:28:10.673 { 00:28:10.673 "results": [ 00:28:10.673 { 00:28:10.674 "job": "nvme0n1", 00:28:10.674 "core_mask": "0x2", 00:28:10.674 "workload": "randwrite", 00:28:10.674 "status": "finished", 00:28:10.674 "queue_depth": 16, 00:28:10.674 "io_size": 131072, 00:28:10.674 "runtime": 2.00377, 00:28:10.674 "iops": 5907.863676968914, 00:28:10.674 "mibps": 738.4829596211142, 00:28:10.674 "io_failed": 0, 00:28:10.674 "io_timeout": 0, 00:28:10.674 "avg_latency_us": 2700.5755616877223, 00:28:10.674 "min_latency_us": 1723.354074074074, 00:28:10.674 "max_latency_us": 9369.22074074074 00:28:10.674 } 00:28:10.674 ], 00:28:10.674 "core_count": 1 00:28:10.674 } 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:10.674 | select(.opcode=="crc32c") 00:28:10.674 | "\(.module_name) \(.executed)"' 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 930676 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 930676 ']' 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 930676 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 930676 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 930676' 00:28:10.674 killing process with pid 930676 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 930676 00:28:10.674 Received shutdown signal, test time was about 2.000000 seconds 00:28:10.674 
00:28:10.674 Latency(us) 00:28:10.674 [2024-11-06T08:04:23.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.674 [2024-11-06T08:04:23.963Z] =================================================================================================================== 00:28:10.674 [2024-11-06T08:04:23.963Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.674 09:04:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 930676 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 929185 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 929185 ']' 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 929185 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 929185 00:28:10.932 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 929185' 00:28:11.190 killing process with pid 929185 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 929185 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 929185 00:28:11.190 00:28:11.190 real 0m15.968s 
00:28:11.190 user 0m31.878s 00:28:11.190 sys 0m4.432s 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:11.190 ************************************ 00:28:11.190 END TEST nvmf_digest_clean 00:28:11.190 ************************************ 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:11.190 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:11.449 ************************************ 00:28:11.449 START TEST nvmf_digest_error 00:28:11.449 ************************************ 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=931111 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:11.449 09:04:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 931111 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 931111 ']' 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.449 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.449 [2024-11-06 09:04:24.554259] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:11.449 [2024-11-06 09:04:24.554340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.449 [2024-11-06 09:04:24.625636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.449 [2024-11-06 09:04:24.681666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.449 [2024-11-06 09:04:24.681722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:11.449 [2024-11-06 09:04:24.681751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.449 [2024-11-06 09:04:24.681763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.449 [2024-11-06 09:04:24.681772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.449 [2024-11-06 09:04:24.682393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.708 [2024-11-06 09:04:24.811129] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.708 09:04:24 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.708 null0 00:28:11.708 [2024-11-06 09:04:24.925984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.708 [2024-11-06 09:04:24.950224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=931256 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 931256 /var/tmp/bperf.sock 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 931256 ']' 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:11.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:11.708 09:04:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:11.966 [2024-11-06 09:04:24.997953] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:11.966 [2024-11-06 09:04:24.998039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931256 ] 00:28:11.966 [2024-11-06 09:04:25.062253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.966 [2024-11-06 09:04:25.118946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.966 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.966 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:11.966 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:11.966 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.224 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:12.789 nvme0n1 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:12.790 09:04:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:12.790 Running I/O for 2 seconds... 00:28:12.790 [2024-11-06 09:04:25.955908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:25.955969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:25.955989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:25.972094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:25.972148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:6736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:25.972170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:25.987718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:25.987748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:24962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:25.987780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.003444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.003474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18164 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.003504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.020027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.020057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.020088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.034693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.034725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.034742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.046039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.046070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.046103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.059996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.060028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.060045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.790 [2024-11-06 09:04:26.075266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:12.790 [2024-11-06 09:04:26.075298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.790 [2024-11-06 09:04:26.075315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.089478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.089521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.089545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.102229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.102274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.102291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.113876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.113904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.113935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.128745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.128773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.128803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.141301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.141330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.141361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.155524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.155565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.155582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.169589] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.169621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.169638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.184660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.184690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.184708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.196826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.196879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.196897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.211504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.211539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.211572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.225779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.225810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.049 [2024-11-06 09:04:26.225853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.049 [2024-11-06 09:04:26.241057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.049 [2024-11-06 09:04:26.241089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.241107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.251548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.251577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.251607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.265939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.265969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.266000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.279330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.279374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.279390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.291251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.291278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.291309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.303772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.303800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.303837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.316729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.316757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 
09:04:26.316788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.050 [2024-11-06 09:04:26.330941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.050 [2024-11-06 09:04:26.330972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.050 [2024-11-06 09:04:26.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.345749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.345782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.345800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.356925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.356954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.356985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.372063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.372094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7109 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.372112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.385270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.385297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.385326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.399907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.399938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.399954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.414484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.414516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.414533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.426596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.426639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.426655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.439173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.439204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.439227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.451391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.451418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.451448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.463329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.463358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.478252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 
00:28:13.309 [2024-11-06 09:04:26.478280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.478312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.490400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.490428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.490459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.503389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.503416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.503447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.518065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.518094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.518111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.531035] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.531064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.531096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.547694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.547738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.547754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.560919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.560952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.560984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.576947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.576976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.577007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:13.309 [2024-11-06 09:04:26.590228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.309 [2024-11-06 09:04:26.590258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.309 [2024-11-06 09:04:26.590290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.603498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.603529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.603546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.618655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.618686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.618704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.630250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.630277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.630307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.645198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.645234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.645265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.659937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.659966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.659997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.673886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.673914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.673946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.686091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.568 [2024-11-06 09:04:26.686122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.568 [2024-11-06 09:04:26.686139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.568 [2024-11-06 09:04:26.699667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.699695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.699725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.713560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.713588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.713618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.726942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.726972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.726989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.739185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.739215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22519 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.739246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.754049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.754079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.754097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.768999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.769044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.769061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.783655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.783698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.783713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.794744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.794778] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.794810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.809607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.809636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.809669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.821720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.821749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.821779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.836468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 09:04:26.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.836528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.569 [2024-11-06 09:04:26.851333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.569 [2024-11-06 
09:04:26.851365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.569 [2024-11-06 09:04:26.851383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.862822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.862875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.862894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.875489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.875535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.875552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.890381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.890409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.890440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.905773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.905806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.905823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.917143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.917188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.917204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.933527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.933556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.933587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 18439.00 IOPS, 72.03 MiB/s [2024-11-06T08:04:27.117Z] [2024-11-06 09:04:26.948403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.948433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.948465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:13.828 [2024-11-06 09:04:26.961860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.961905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.961923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.972955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.972999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.973017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.828 [2024-11-06 09:04:26.987616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.828 [2024-11-06 09:04:26.987647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.828 [2024-11-06 09:04:26.987665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:26.998759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:26.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:26.998816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.012509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.012537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.012567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.028104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.028151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.028173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.040426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.040454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.040486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.057771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.057802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.057820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.068643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.068672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.068703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.082002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.082034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.082052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.097889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.097921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.097938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:13.829 [2024-11-06 09:04:27.112927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:13.829 [2024-11-06 09:04:27.112959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1439 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:13.829 [2024-11-06 09:04:27.112977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.128342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.128372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.128404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.143399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.143448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.143465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.153811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.153868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.153885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.169902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.169949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:65 nsid:1 lba:4947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.169967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.185181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.185209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.185241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.201314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.201343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.201374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.213903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.213932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.213964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.226525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 
09:04:27.226553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.226584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.240246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.240277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.240295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.255676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.255707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.255724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.267724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.267752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.267787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.281111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.281142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.281159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.295785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.295827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.295854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.309361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.309388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.309419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.324001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.324029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.324060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.336949] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.336979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.336996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.350771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.350799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.350830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.088 [2024-11-06 09:04:27.365344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.088 [2024-11-06 09:04:27.365372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.088 [2024-11-06 09:04:27.365403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.379680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.379711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.391896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.391931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.391963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.405176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.405207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.405225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.417144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.417186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.417202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.429707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.429736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.429768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.442850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.442879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.442911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.458219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.458250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.458267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.472894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.472925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.472942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.484746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.484773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.484803] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.347 [2024-11-06 09:04:27.497849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.347 [2024-11-06 09:04:27.497893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.347 [2024-11-06 09:04:27.497910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.511283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.511314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.511331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.526679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.526707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.526737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.536836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.536864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2159 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.536894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.552325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.552353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.552383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.567545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.567575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.567593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.577908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.577938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.577955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.592492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.592520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:49 nsid:1 lba:12795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.592549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.608488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.608515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.608546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.348 [2024-11-06 09:04:27.624283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.348 [2024-11-06 09:04:27.624311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.348 [2024-11-06 09:04:27.624348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.607 [2024-11-06 09:04:27.639630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.607 [2024-11-06 09:04:27.639659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.639690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.653659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 
09:04:27.653690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.664738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.664766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.664795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.678157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.678188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.678205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.691152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.691194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.691209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.705378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.705405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.705435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.719293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.719321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.719350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.733608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.733635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.733666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.748536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.748586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.748602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.760433] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.760461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.760492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.774653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.774681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.774711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.788109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.788153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.788169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.802970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.803003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.803020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.817163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.817207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.817224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.834006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.834048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.834066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.845408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.845437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.845468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.860413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.860444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.860462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.873517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.873548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.873566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.608 [2024-11-06 09:04:27.885749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.608 [2024-11-06 09:04:27.885780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.608 [2024-11-06 09:04:27.885797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.868 [2024-11-06 09:04:27.900268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.868 [2024-11-06 09:04:27.900297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.868 [2024-11-06 09:04:27.900329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.868 [2024-11-06 09:04:27.913202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.868 [2024-11-06 09:04:27.913231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.868 [2024-11-06 
09:04:27.913261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.868 [2024-11-06 09:04:27.927915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.868 [2024-11-06 09:04:27.927946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.868 [2024-11-06 09:04:27.927963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.868 18462.00 IOPS, 72.12 MiB/s [2024-11-06T08:04:28.157Z] [2024-11-06 09:04:27.944504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1cfa6f0) 00:28:14.868 [2024-11-06 09:04:27.944533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:14.868 [2024-11-06 09:04:27.944550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:14.868 00:28:14.868 Latency(us) 00:28:14.868 [2024-11-06T08:04:28.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.868 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:14.868 nvme0n1 : 2.01 18445.72 72.05 0.00 0.00 6931.20 3373.89 21942.42 00:28:14.868 [2024-11-06T08:04:28.157Z] =================================================================================================================== 00:28:14.868 [2024-11-06T08:04:28.157Z] Total : 18445.72 72.05 0.00 0.00 6931.20 3373.89 21942.42 00:28:14.868 { 00:28:14.868 "results": [ 00:28:14.868 { 00:28:14.868 "job": "nvme0n1", 00:28:14.868 "core_mask": "0x2", 00:28:14.868 "workload": "randread", 00:28:14.868 "status": 
"finished", 00:28:14.868 "queue_depth": 128, 00:28:14.868 "io_size": 4096, 00:28:14.868 "runtime": 2.008705, 00:28:14.868 "iops": 18445.715025352154, 00:28:14.868 "mibps": 72.05357431778185, 00:28:14.868 "io_failed": 0, 00:28:14.868 "io_timeout": 0, 00:28:14.868 "avg_latency_us": 6931.199902759286, 00:28:14.868 "min_latency_us": 3373.8903703703704, 00:28:14.868 "max_latency_us": 21942.423703703702 00:28:14.868 } 00:28:14.868 ], 00:28:14.868 "core_count": 1 00:28:14.868 } 00:28:14.868 09:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:14.868 09:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:14.868 09:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:14.868 | .driver_specific 00:28:14.868 | .nvme_error 00:28:14.868 | .status_code 00:28:14.868 | .command_transient_transport_error' 00:28:14.868 09:04:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:15.126 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 931256 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 931256 ']' 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 931256 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931256 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931256' 00:28:15.127 killing process with pid 931256 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 931256 00:28:15.127 Received shutdown signal, test time was about 2.000000 seconds 00:28:15.127 00:28:15.127 Latency(us) 00:28:15.127 [2024-11-06T08:04:28.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.127 [2024-11-06T08:04:28.416Z] =================================================================================================================== 00:28:15.127 [2024-11-06T08:04:28.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:15.127 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 931256 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=931663 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 931663 /var/tmp/bperf.sock 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 931663 ']' 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.386 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.386 [2024-11-06 09:04:28.548496] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:15.386 [2024-11-06 09:04:28.548580] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid931663 ] 00:28:15.386 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:15.386 Zero copy mechanism will not be used. 
00:28:15.386 [2024-11-06 09:04:28.613781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.386 [2024-11-06 09:04:28.671668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.645 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.645 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:15.645 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:15.645 09:04:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.903 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.478 nvme0n1 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:16.478 09:04:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.478 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:16.478 Zero copy mechanism will not be used. 00:28:16.478 Running I/O for 2 seconds... 00:28:16.478 [2024-11-06 09:04:29.676072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.478 [2024-11-06 09:04:29.676141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.478 [2024-11-06 09:04:29.676164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.478 [2024-11-06 09:04:29.682792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.478 [2024-11-06 09:04:29.682841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.478 [2024-11-06 09:04:29.682878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.478 
[2024-11-06 09:04:29.689943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.478 [2024-11-06 09:04:29.689977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.478 [2024-11-06 09:04:29.689995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.478 [2024-11-06 09:04:29.696891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.478 [2024-11-06 09:04:29.696922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.696955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.703764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.703808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.703826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.710697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.710742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.710760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.718220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.718251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.718285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.725283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.725314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.725345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.732270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.732300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.732333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.739457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.739487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.739519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.746644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.746675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.746707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.753838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.753870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.753903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.479 [2024-11-06 09:04:29.761340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.479 [2024-11-06 09:04:29.761370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.479 [2024-11-06 09:04:29.761403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.768282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.768312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:16.742 [2024-11-06 09:04:29.768343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.775014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.775045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.775077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.781523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.781552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.781584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.787184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.787225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.787258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.792685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.792715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.792748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.798243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.798273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.798312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.803747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.803781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.803814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.809182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.809246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.815035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.815068] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.815086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.821156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.821203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.821221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.828646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.828679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.828712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.836340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.836372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.836405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.843675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.843724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.843742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.849519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.849551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.849569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.855681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.855734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.855754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.742 [2024-11-06 09:04:29.862578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.742 [2024-11-06 09:04:29.862610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.742 [2024-11-06 09:04:29.862644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.869076] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.869109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.869144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.876051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.876083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.876116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.881806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.881846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.881867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.887329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.887377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.887396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.893097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.893144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.893162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.898838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.898871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.898889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.904660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.904690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.904722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.910464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.910496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.916074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.916106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.916124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.921448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.921481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.921499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.926993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.927025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.927043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.932448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.932480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.932498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.937830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.937870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.937888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.943391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.943422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.943455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.949149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.949182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.949215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.954719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.954758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.961452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.961483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.961516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.967844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.967877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.967896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.973989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.974021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.974055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.980586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.980618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.980651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.987517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.987549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.987581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:29.994082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:29.994129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:29.994147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:30.001065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:30.001105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:30.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:30.006933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.743 [2024-11-06 09:04:30.006969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.743 [2024-11-06 09:04:30.006988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:16.743 [2024-11-06 09:04:30.012272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.744 [2024-11-06 09:04:30.012306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.744 [2024-11-06 09:04:30.012340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:16.744 [2024-11-06 09:04:30.018610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.744 [2024-11-06 09:04:30.018653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.744 [2024-11-06 09:04:30.018672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:16.744 [2024-11-06 09:04:30.025981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:16.744 [2024-11-06 09:04:30.026016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.744 [2024-11-06 09:04:30.026035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.003 [2024-11-06 09:04:30.033494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8cdc20) 00:28:17.003 [2024-11-06 09:04:30.033532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.033567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.041091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.041127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.041146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.047550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.047617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.054387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.054418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.054449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.062078] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.062112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.062130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.069699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.069734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.069777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.077277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.077327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.077346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.084812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.084861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.084880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.092988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.093022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.093041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.101397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.101446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.101464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.109446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.109480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.109499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.004 [2024-11-06 09:04:30.117138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.004 [2024-11-06 09:04:30.117171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.004 [2024-11-06 09:04:30.117205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.004 [2024-11-06 09:04:30.125444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.004 [2024-11-06 09:04:30.125478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.004 [2024-11-06 09:04:30.125496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern (nvme_tcp.c:1365 data digest error on tqpair=(0x8cdc20), nvme_qpair.c:243 READ command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for each in-flight READ on qid:1, with varying cid/lba/sqhd values, from 09:04:30.133 through 09:04:30.639 ...]
00:28:17.527 [2024-11-06 09:04:30.647554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.527 [2024-11-06 09:04:30.647587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.527 [2024-11-06 09:04:30.647625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.527 [2024-11-06 09:04:30.655537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.527 [2024-11-06 09:04:30.655585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.527 [2024-11-06 09:04:30.655602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.527 [2024-11-06 09:04:30.663811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.527 [2024-11-06 09:04:30.663852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.527 [2024-11-06 09:04:30.663871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.527 4688.00 IOPS, 586.00 MiB/s [2024-11-06T08:04:30.816Z] [2024-11-06 09:04:30.672562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.527 [2024-11-06 09:04:30.672595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.527 [2024-11-06 09:04:30.672629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.527 [2024-11-06 09:04:30.680729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.527 [2024-11-06 09:04:30.680761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.527 [2024-11-06 09:04:30.680779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.527 [2024-11-06 09:04:30.689120] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.527 [2024-11-06 09:04:30.689153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.527 [2024-11-06 09:04:30.689172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.527 [2024-11-06 09:04:30.696935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.527 [2024-11-06 09:04:30.696969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.696987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.705313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.705359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.705377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.713876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.713921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.713944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.721987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.722020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.722038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.729721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.729769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.729787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.737513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.737546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.737564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.745222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.745256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.745275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.752435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.752482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.752500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.758453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.758499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.758515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.765394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.765428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.772449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.772483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.772502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.778333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.778373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.778392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.784150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.784183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.784201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.789933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.789966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.789984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.795592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.528 [2024-11-06 09:04:30.795665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.802063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.802111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.802128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.808651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.808683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.808702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.528 [2024-11-06 09:04:30.814403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.528 [2024-11-06 09:04:30.814450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.528 [2024-11-06 09:04:30.814466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.820253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.820286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.820304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.826018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.826051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.826069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.832174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.832220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.832238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.838995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.839028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.839046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.846790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.846825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.846852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.852714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.852748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.852765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.858605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.858638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.858657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.864508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.788 [2024-11-06 09:04:30.864540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.864572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.870689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 
00:28:17.788 [2024-11-06 09:04:30.870722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.788 [2024-11-06 09:04:30.870739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.788 [2024-11-06 09:04:30.877158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.877190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.877208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.884128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.884176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.884199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.890795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.890828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.890855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.896537] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.896570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.896588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.902377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.902424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.902444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.908300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.908348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.908366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.913949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.913982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.914000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.919590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.919623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.919640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.925638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.925671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.925704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.931532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.931580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.931597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.937581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.937620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.937638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.943388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.943421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.943438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.949095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.949128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.949147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.952785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.952817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.952842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.958164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.958195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.958227] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.964277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.964323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.964340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.971336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.971380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.971399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.977172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.977218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:17.789 [2024-11-06 09:04:30.977235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:17.789 [2024-11-06 09:04:30.983032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:17.789 [2024-11-06 09:04:30.983065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:17.789 [2024-11-06 09:04:30.983097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:30.988893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:30.988925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.789 [2024-11-06 09:04:30.988958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:30.994702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:30.994733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.789 [2024-11-06 09:04:30.994765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:31.000756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:31.000801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.789 [2024-11-06 09:04:31.000817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:31.006806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:31.006861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.789 [2024-11-06 09:04:31.006880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:31.012377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:31.012422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.789 [2024-11-06 09:04:31.012438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.789 [2024-11-06 09:04:31.018231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.789 [2024-11-06 09:04:31.018277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.018294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.024139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.024169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.024201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.029981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.030029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.030047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.035608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.035640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.035682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.041390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.041437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.041452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.047385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.047414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.047446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.053058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.053090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.053122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.058886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.058930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.058948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.064667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.064713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.064729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.070613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.070660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.070677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:17.790 [2024-11-06 09:04:31.076425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:17.790 [2024-11-06 09:04:31.076457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:17.790 [2024-11-06 09:04:31.076475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.082257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.082289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.082307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.087612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.087662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.087679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.092980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.093011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.093028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.098361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.098406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.098425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.103943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.103974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.104006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.109290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.109335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.109352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.114788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.114818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.114857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.120157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.120202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.120219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.125626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.125672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.125688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.131227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.131257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.131289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.137102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.137149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.137166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.144412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.144444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.144477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.150814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.150871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.150889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.156617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.156647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.156680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.161960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.161994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.162012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.167361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.050 [2024-11-06 09:04:31.167393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.050 [2024-11-06 09:04:31.167426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.050 [2024-11-06 09:04:31.172748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.172778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.172795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.178238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.178268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.178301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.184002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.184048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.184071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.189771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.189814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.189838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.195586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.195615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.195630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.201321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.201372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.207237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.207266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.207296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.213085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.213134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.213151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.219089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.219136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.219153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.224849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.224895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.224914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.230697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.230744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.230763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.236753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.236786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.236804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.242986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.243018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.243036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.250120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.250168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.250186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.257466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.257513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.257532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.263258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.263290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.263322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.269240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.269271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.269302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.274866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.274898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.274930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.281201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.281232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.281264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.288754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.288786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.288826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.294584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.294616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.294635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.300344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.300375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.300407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.306183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.306231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.306247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.311789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.311843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.311863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.051 [2024-11-06 09:04:31.317820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.051 [2024-11-06 09:04:31.317874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.051 [2024-11-06 09:04:31.317893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.052 [2024-11-06 09:04:31.325231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.052 [2024-11-06 09:04:31.325276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.052 [2024-11-06 09:04:31.325293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.052 [2024-11-06 09:04:31.333060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.052 [2024-11-06 09:04:31.333094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.052 [2024-11-06 09:04:31.333112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.311 [2024-11-06 09:04:31.340999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.311 [2024-11-06 09:04:31.341032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.311 [2024-11-06 09:04:31.341049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.311 [2024-11-06 09:04:31.347094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.311 [2024-11-06 09:04:31.347133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.311 [2024-11-06 09:04:31.347152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.311 [2024-11-06 09:04:31.353251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.311 [2024-11-06 09:04:31.353283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.311 [2024-11-06 09:04:31.353301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.311 [2024-11-06 09:04:31.359860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.311 [2024-11-06 09:04:31.359892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.311 [2024-11-06 09:04:31.359911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.311 [2024-11-06 09:04:31.367301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.367349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.367366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.373319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.373352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.373370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.379001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.379033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.379050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.384697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.384729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.384747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.390475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.390524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.390542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.396397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.396430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.396448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.402182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.402216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.402233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.407932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.407964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.407982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.413232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.413264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.413282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.419466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.419500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.419517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.426753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.426785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.426803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.432589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.432621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.432639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.438369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.438402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.438419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.444322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.444354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.444372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.450075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.450107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.450131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.455962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.455995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.456014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.461287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.461320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.461338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:18.312 [2024-11-06 09:04:31.466604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.312 [2024-11-06 09:04:31.466652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.312 [2024-11-06 09:04:31.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021
p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.472104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.472137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.472156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.477602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.477634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.477667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.483131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.483179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.483196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.488867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.488899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.488918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.494518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.494552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.494570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.500892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.500930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.500949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.312 [2024-11-06 09:04:31.505477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.312 [2024-11-06 09:04:31.505510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.312 [2024-11-06 09:04:31.505529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.511348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.511378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.511409] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.518432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.518478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.518494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.525703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.525747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.525764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.532903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.532950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.532968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.539599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.539644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:18.313 [2024-11-06 09:04:31.539660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.546389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.546422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.546440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.552818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.552873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.552891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.559876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.559928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.559946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.566897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.566946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.566964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.573605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.573637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.573670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.580871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.580918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.580935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.588012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.588062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.588080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.594183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.594268] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.594304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.313 [2024-11-06 09:04:31.600042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.313 [2024-11-06 09:04:31.600075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.313 [2024-11-06 09:04:31.600093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.607491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.607538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.607555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.613422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.613455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.613480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.618922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 
00:28:18.572 [2024-11-06 09:04:31.618955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.618973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.624373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.624405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.624423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.629845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.629877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.629895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.635265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.635298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.635315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.640713] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.640745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.640763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.646206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.646239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.572 [2024-11-06 09:04:31.646257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:18.572 [2024-11-06 09:04:31.651733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.572 [2024-11-06 09:04:31.651766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.573 [2024-11-06 09:04:31.651784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:18.573 [2024-11-06 09:04:31.657244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20) 00:28:18.573 [2024-11-06 09:04:31.657276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:18.573 [2024-11-06 09:04:31.657309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:28:18.573 [2024-11-06 09:04:31.662882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.573 [2024-11-06 09:04:31.662921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.573 [2024-11-06 09:04:31.662940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:18.573 4866.00 IOPS, 608.25 MiB/s [2024-11-06T08:04:31.862Z] [2024-11-06 09:04:31.669885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8cdc20)
00:28:18.573 [2024-11-06 09:04:31.669918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:18.573 [2024-11-06 09:04:31.669936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:18.573
00:28:18.573 Latency(us)
00:28:18.573 [2024-11-06T08:04:31.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.573 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:18.573 nvme0n1 : 2.00 4866.21 608.28 0.00 0.00 3282.84 819.20 10728.49
00:28:18.573 [2024-11-06T08:04:31.862Z] ===================================================================================================================
00:28:18.573 [2024-11-06T08:04:31.862Z] Total : 4866.21 608.28 0.00 0.00 3282.84 819.20 10728.49
00:28:18.573 {
00:28:18.573   "results": [
00:28:18.573     {
00:28:18.573       "job": "nvme0n1",
00:28:18.573       "core_mask": "0x2",
00:28:18.573       "workload": "randread",
00:28:18.573       "status": "finished",
00:28:18.573       "queue_depth": 16,
00:28:18.573       "io_size": 131072,
00:28:18.573       "runtime": 2.0032,
00:28:18.573       "iops": 4866.2140575079875,
00:28:18.573       "mibps": 608.2767571884984,
00:28:18.573       "io_failed": 0,
00:28:18.573       "io_timeout": 0,
00:28:18.573       "avg_latency_us": 3282.8351555494764,
00:28:18.573       "min_latency_us": 819.2,
00:28:18.573       "max_latency_us": 10728.485925925926
00:28:18.573     }
00:28:18.573   ],
00:28:18.573   "core_count": 1
00:28:18.573 }
00:28:18.573 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:18.573 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:18.573 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:18.573 | .driver_specific
00:28:18.573 | .nvme_error
00:28:18.573 | .status_code
00:28:18.573 | .command_transient_transport_error'
00:28:18.573 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 314 > 0 ))
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 931663
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 931663 ']'
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 931663
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:18.831 09:04:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931663
00:28:18.831 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:18.831 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:18.831 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931663'
00:28:18.831 killing process with pid 931663
00:28:18.831 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 931663
00:28:18.831 Received shutdown signal, test time was about 2.000000 seconds
00:28:18.831
00:28:18.831 Latency(us)
00:28:18.831 [2024-11-06T08:04:32.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.831 [2024-11-06T08:04:32.120Z] ===================================================================================================================
00:28:18.831 [2024-11-06T08:04:32.120Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:18.831 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 931663
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=932073
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 932073 /var/tmp/bperf.sock
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 932073 ']'
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:19.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:19.089 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.089 [2024-11-06 09:04:32.264678] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:28:19.090 [2024-11-06 09:04:32.264770] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932073 ]
00:28:19.090 [2024-11-06 09:04:32.332176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:19.348 [2024-11-06 09:04:32.389504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:19.348 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:19.348 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:19.348 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:19.348 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:19.607 09:04:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:19.865 nvme0n1
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:19.865 09:04:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:20.201 Running I/O for 2 seconds...
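Earlier in this trace, `get_transient_errcount` pulls the per-bdev transient transport error counter out of `bdev_get_iostat` output with a jq filter (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`). The same extraction can be sketched in Python; the sample payload below is a hypothetical mock built only from the field names the jq filter dereferences, not actual RPC output:

```python
import json

# Hypothetical iostat payload: only the path walked by the test's jq filter
# is taken from the log; the surrounding structure is an assumption.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 314
          }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat: dict) -> int:
    # Same key path the jq filter in host/digest.sh walks.
    return (iostat["bdevs"][0]["driver_specific"]
                  ["nvme_error"]["status_code"]
                  ["command_transient_transport_error"])

print(transient_errcount(sample))  # 314, matching the (( 314 > 0 )) check in the log
```

The test only needs this count to be non-zero, which confirms the injected digest corruption surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions rather than hard I/O failures.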
00:28:20.201 [2024-11-06 09:04:33.250000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef6458 00:28:20.201 [2024-11-06 09:04:33.251132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.251173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.262399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee9168 00:28:20.201 [2024-11-06 09:04:33.263083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.263114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.276765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efc998 00:28:20.201 [2024-11-06 09:04:33.278516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.278559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.287911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016edfdc0 00:28:20.201 [2024-11-06 09:04:33.289148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:7189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.289191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.298635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef57b0 00:28:20.201 [2024-11-06 09:04:33.300401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.300431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.309862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee6fa8 00:28:20.201 [2024-11-06 09:04:33.310714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.310756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.322163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee95a0 00:28:20.201 [2024-11-06 09:04:33.323177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.323205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.333404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee99d8 00:28:20.201 [2024-11-06 09:04:33.334277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.334318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:20.201 [2024-11-06 09:04:33.344862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee6738 00:28:20.201 [2024-11-06 09:04:33.345655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.201 [2024-11-06 09:04:33.345682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.357204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef9b30 00:28:20.202 [2024-11-06 09:04:33.358123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.358166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.371392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef2d80 00:28:20.202 [2024-11-06 09:04:33.372949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.372977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.382012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee2c28 00:28:20.202 [2024-11-06 09:04:33.383874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.383903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.393155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef6020 00:28:20.202 [2024-11-06 09:04:33.394003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.394046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.405410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef31b8 00:28:20.202 [2024-11-06 09:04:33.406389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:20.202 [2024-11-06 09:04:33.416625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef2d80 00:28:20.202 [2024-11-06 09:04:33.417562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.202 [2024-11-06 09:04:33.417591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:20.481 [2024-11-06 09:04:33.430032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee23b8 00:28:20.481 [2024-11-06 09:04:33.431474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.481 
[2024-11-06 09:04:33.431504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:20.481 [2024-11-06 09:04:33.441633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eddc00 00:28:20.481 [2024-11-06 09:04:33.442765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.481 [2024-11-06 09:04:33.442807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:20.481 [2024-11-06 09:04:33.454296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee0630 00:28:20.481 [2024-11-06 09:04:33.455765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.481 [2024-11-06 09:04:33.455808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:20.481 [2024-11-06 09:04:33.466814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee5658 00:28:20.481 [2024-11-06 09:04:33.468473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.481 [2024-11-06 09:04:33.468516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.477632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef92c0 00:28:20.482 [2024-11-06 09:04:33.479544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16150 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.479573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.490102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ede8a8 00:28:20.482 [2024-11-06 09:04:33.491151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.491179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.502212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eed920 00:28:20.482 [2024-11-06 09:04:33.503878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.503906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.514461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efc998 00:28:20.482 [2024-11-06 09:04:33.515992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.516020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.524891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efef90 00:28:20.482 [2024-11-06 09:04:33.526574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:1616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.526607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.537349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016edf118 00:28:20.482 [2024-11-06 09:04:33.538335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.538364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.548481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef1ca0 00:28:20.482 [2024-11-06 09:04:33.549294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.549323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.559519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eefae0 00:28:20.482 [2024-11-06 09:04:33.560216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.560244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.573359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef1ca0 00:28:20.482 [2024-11-06 09:04:33.574762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.574805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.584999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef7da8 00:28:20.482 [2024-11-06 09:04:33.586491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.586533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.596021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef7970 00:28:20.482 [2024-11-06 09:04:33.597408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.597451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.607937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee2c28 00:28:20.482 [2024-11-06 09:04:33.609213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.609241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.619924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef35f0 00:28:20.482 
[2024-11-06 09:04:33.620847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.620876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.631532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee5658 00:28:20.482 [2024-11-06 09:04:33.632851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.632880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.643450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee0630 00:28:20.482 [2024-11-06 09:04:33.644641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.644683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.654648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eea248 00:28:20.482 [2024-11-06 09:04:33.655422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.655465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.666921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb32d50) with pdu=0x200016efb048 00:28:20.482 [2024-11-06 09:04:33.667516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.667545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.681158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee9168 00:28:20.482 [2024-11-06 09:04:33.682774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.682817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.693325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee73e0 00:28:20.482 [2024-11-06 09:04:33.695012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.695055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.704687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef8e88 00:28:20.482 [2024-11-06 09:04:33.706232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.706274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.713290] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef5378 00:28:20.482 [2024-11-06 09:04:33.714054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.714099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.725464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eecc78 00:28:20.482 [2024-11-06 09:04:33.726217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.726261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.739372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef1868 00:28:20.482 [2024-11-06 09:04:33.740269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.740298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:20.482 [2024-11-06 09:04:33.750533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee6fa8 00:28:20.482 [2024-11-06 09:04:33.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.482 [2024-11-06 09:04:33.751341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:28:20.758 [2024-11-06 09:04:33.761696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee3d08 00:28:20.758 [2024-11-06 09:04:33.762323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.762352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.775763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee49b0 00:28:20.758 [2024-11-06 09:04:33.777520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.777564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.788341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eebfd0 00:28:20.758 [2024-11-06 09:04:33.790198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.790241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.796920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee190 00:28:20.758 [2024-11-06 09:04:33.797872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.797915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.811597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee88f8 00:28:20.758 [2024-11-06 09:04:33.813136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.822454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee4de8 00:28:20.758 [2024-11-06 09:04:33.824079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.824109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.835025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee27f0 00:28:20.758 [2024-11-06 09:04:33.836104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.836140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.845792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef5378 00:28:20.758 [2024-11-06 09:04:33.847221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.847250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.859705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee6fa8 00:28:20.758 [2024-11-06 09:04:33.861320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.861363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.870094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee27f0 00:28:20.758 [2024-11-06 09:04:33.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.871195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.884615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eec408 00:28:20.758 [2024-11-06 09:04:33.886517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.886561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.893154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee4de8 00:28:20.758 [2024-11-06 09:04:33.894015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.894044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.904948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee7818 00:28:20.758 [2024-11-06 09:04:33.905944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.905988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.917115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eedd58 00:28:20.758 [2024-11-06 09:04:33.918122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.918166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.932051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee27f0 00:28:20.758 [2024-11-06 09:04:33.933855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.933907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.942957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee6b70 00:28:20.758 [2024-11-06 09:04:33.944174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 
[2024-11-06 09:04:33.944203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.954066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eebb98 00:28:20.758 [2024-11-06 09:04:33.955339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.758 [2024-11-06 09:04:33.955368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:20.758 [2024-11-06 09:04:33.966337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efa7d8 00:28:20.758 [2024-11-06 09:04:33.967327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:33.967356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:33.977391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efbcf0 00:28:20.759 [2024-11-06 09:04:33.978215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:33.978243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:33.989459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee01f8 00:28:20.759 [2024-11-06 09:04:33.990363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24085 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:33.990406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:34.003553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef81e0 00:28:20.759 [2024-11-06 09:04:34.004589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:34.004618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:34.014383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ef6cc8 00:28:20.759 [2024-11-06 09:04:34.015678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:34.015707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:34.026446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee1710 00:28:20.759 [2024-11-06 09:04:34.027628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:34.027670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:20.759 [2024-11-06 09:04:34.041144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016ee27f0 00:28:20.759 [2024-11-06 09:04:34.042975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:122 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:20.759 [2024-11-06 09:04:34.043004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.049705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee190 00:28:21.017 [2024-11-06 09:04:34.050705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.050732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.064238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efda78 00:28:21.017 [2024-11-06 09:04:34.065737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.065781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.075158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eddc00 00:28:21.017 [2024-11-06 09:04:34.076769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.076798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.086344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016efa7d8 00:28:21.017 [2024-11-06 09:04:34.087234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.087262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.099919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.100156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.100185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.113894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.114133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.114174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.127879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.128116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.128159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.141771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 
[2024-11-06 09:04:34.142034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.142063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.155636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.155903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.155930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.169592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.169874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.169903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.183547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.183797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.183848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.197572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) 
with pdu=0x200016eee5c8 00:28:21.017 [2024-11-06 09:04:34.197818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.017 [2024-11-06 09:04:34.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.017 [2024-11-06 09:04:34.211452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.211697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.211724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 [2024-11-06 09:04:34.225440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.225690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.225718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 20765.00 IOPS, 81.11 MiB/s [2024-11-06T08:04:34.307Z] [2024-11-06 09:04:34.239446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.239761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.239790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 [2024-11-06 
09:04:34.253319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.253580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.253621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 [2024-11-06 09:04:34.267080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.267354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 [2024-11-06 09:04:34.280743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.280988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.281023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.018 [2024-11-06 09:04:34.294390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.018 [2024-11-06 09:04:34.294725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.018 [2024-11-06 09:04:34.294767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.308064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.308326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.308368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.321990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.322231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.322259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.335985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.336344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.350103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.350433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.350478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.364366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.364709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.364737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.378361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.378609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.378652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.392419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.392757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.392800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.406574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.406889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.406917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.420741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.421027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.421056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.434589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.434853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.434881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.448672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.448953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.448981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.462573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.462840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.462868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.476577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.476903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.476931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.490599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.490870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.490899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.504497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.504783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.504811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.518404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.518671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 
[2024-11-06 09:04:34.518699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.532274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.532520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.532562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.546182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.546528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.546556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.276 [2024-11-06 09:04:34.560232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.276 [2024-11-06 09:04:34.560514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.276 [2024-11-06 09:04:34.560542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.573690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.573973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8429 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.574001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.587696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.588043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.588071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.601858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.602218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.602245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.615829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.616175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.616202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.630007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.630294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:11956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.630336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.644081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.644433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.644481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.658403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.658689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.658732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.672374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.672695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.672737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.686400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.686746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.534 [2024-11-06 09:04:34.686773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.534 [2024-11-06 09:04:34.700306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.534 [2024-11-06 09:04:34.700597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.700640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.714413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.714757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.714785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.728512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.728778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.728819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.742504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 
[2024-11-06 09:04:34.742808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.756375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.756627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.756668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.770307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.770589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.770617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.784007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.784305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.784347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.797907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with 
pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.798221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.798248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.535 [2024-11-06 09:04:34.811910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.535 [2024-11-06 09:04:34.812175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.535 [2024-11-06 09:04:34.812217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.825515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.825859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.825887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.838968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.839217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.839258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.852964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.853229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.853271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.867220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.867488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.867531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.881182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.881459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.881485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.895166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.895483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.895526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.909103] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.909403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.909446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.923400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.923753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.937324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.937613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.937656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.951520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.951808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.951860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:21.793 [2024-11-06 09:04:34.965475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.965935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.965963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.979460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.979753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.979795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:34.993462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:34.993711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:34.993737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.007477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.007729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.007756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.021466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.021687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.021730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.035329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.035609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.035651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.049524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.049918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.049960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.063568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.063869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.063897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.793 [2024-11-06 09:04:35.077525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:21.793 [2024-11-06 09:04:35.077772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:21.793 [2024-11-06 09:04:35.077800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.090744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.090990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.091018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.104263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.104589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.104633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.118253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.118552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.118580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.132600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.132929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.132962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.146606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.146913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.160935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.161187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.161229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.175006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.175259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 
[2024-11-06 09:04:35.175301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.189120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.189449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.189492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.203047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.203378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.203406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.217310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.217577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.217619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 [2024-11-06 09:04:35.231397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb32d50) with pdu=0x200016eee5c8 00:28:22.052 [2024-11-06 09:04:35.231664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2886 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:22.052 [2024-11-06 09:04:35.231705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.052 19530.50 IOPS, 76.29 MiB/s 00:28:22.052 Latency(us) 00:28:22.052 [2024-11-06T08:04:35.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.052 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:22.052 nvme0n1 : 2.01 19531.52 76.30 0.00 0.00 6538.40 2742.80 16311.18 00:28:22.052 [2024-11-06T08:04:35.341Z] =================================================================================================================== 00:28:22.052 [2024-11-06T08:04:35.341Z] Total : 19531.52 76.30 0.00 0.00 6538.40 2742.80 16311.18 00:28:22.052 { 00:28:22.052 "results": [ 00:28:22.052 { 00:28:22.052 "job": "nvme0n1", 00:28:22.052 "core_mask": "0x2", 00:28:22.052 "workload": "randwrite", 00:28:22.052 "status": "finished", 00:28:22.052 "queue_depth": 128, 00:28:22.052 "io_size": 4096, 00:28:22.052 "runtime": 2.008087, 00:28:22.052 "iops": 19531.524281567483, 00:28:22.052 "mibps": 76.29501672487298, 00:28:22.052 "io_failed": 0, 00:28:22.052 "io_timeout": 0, 00:28:22.052 "avg_latency_us": 6538.401703244767, 00:28:22.052 "min_latency_us": 2742.8029629629627, 00:28:22.052 "max_latency_us": 16311.182222222222 00:28:22.052 } 00:28:22.052 ], 00:28:22.052 "core_count": 1 00:28:22.052 } 00:28:22.052 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:22.052 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:22.052 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:22.052 | .driver_specific 00:28:22.052 | .nvme_error 00:28:22.052 | .status_code 00:28:22.052 | .command_transient_transport_error' 00:28:22.052 09:04:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 )) 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 932073 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 932073 ']' 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 932073 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 932073 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 932073' 00:28:22.310 killing process with pid 932073 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 932073 00:28:22.310 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.310 00:28:22.310 Latency(us) 00:28:22.310 [2024-11-06T08:04:35.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.310 [2024-11-06T08:04:35.599Z] 
=================================================================================================================== 00:28:22.310 [2024-11-06T08:04:35.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.310 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 932073 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=932483 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 932483 /var/tmp/bperf.sock 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 932483 ']' 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:22.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:22.568 09:04:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.568 [2024-11-06 09:04:35.842400] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:22.568 [2024-11-06 09:04:35.842485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932483 ] 00:28:22.568 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.568 Zero copy mechanism will not be used. 00:28:22.826 [2024-11-06 09:04:35.909073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.826 [2024-11-06 09:04:35.967485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.826 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:22.826 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:22.826 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.826 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:23.083 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:23.083 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.083 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.340 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.340 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.340 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.598 nvme0n1 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:23.598 09:04:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:23.598 Zero copy mechanism will not be used. 00:28:23.598 Running I/O for 2 seconds... 
00:28:23.598 [2024-11-06 09:04:36.840828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.841168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.841208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.846224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.846594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.846625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.851649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.851967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.851997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.856977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.857262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.857293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.862036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.862338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.862382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.867193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.867500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.867530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.872205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.872487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.872518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.877131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.877442] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.882108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.882389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.882425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.598 [2024-11-06 09:04:36.887091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.598 [2024-11-06 09:04:36.887371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.598 [2024-11-06 09:04:36.887401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.892157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.892461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.892490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.897279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.897585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.857 [2024-11-06 09:04:36.897614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.902591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.902884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.902914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.907773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.908104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.908134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.913307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.913575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.913605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.919137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.919471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.919513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.925195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.925509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.925537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.930564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.930889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.930919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.935939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.936232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.936261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.941364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.941647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.941691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.946966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.947315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.947344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.953448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.953764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.953793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.960016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:23.857 [2024-11-06 09:04:36.960309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.857 [2024-11-06 09:04:36.960338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:23.857 [2024-11-06 09:04:36.966540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 
00:28:23.857 [2024-11-06 09:04:36.966843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.857 [2024-11-06 09:04:36.966872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.857 [2024-11-06 09:04:36.972904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.857 [2024-11-06 09:04:36.973175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.857 [2024-11-06 09:04:36.973221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.857 [2024-11-06 09:04:36.979010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.857 [2024-11-06 09:04:36.979298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:36.979327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:36.985528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:36.985804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:36.985842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:36.991720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:36.991992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:36.992022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:36.996774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:36.997061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:36.997090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.002207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.002487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.002516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.007497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.007747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.007789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.012699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.012999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.017827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.018086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.018130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.022758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.023032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.023061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.027474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.027770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.032775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.033046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.033076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.038896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.039149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.039180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.044223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.044476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.044505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.050405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.050655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.050685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.056439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.056694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.056723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.062809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.063076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.063106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.068919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.069173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.069203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.074904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.075283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.075326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.081148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.081432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.081461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.087205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.087485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.087515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.093248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.093526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.093555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.099154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.099437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.099466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.106183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.106547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.106576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.112380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.112672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.112700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.118533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.118811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.118847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.124708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.125067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.125097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.131638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.131990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.858 [2024-11-06 09:04:37.132020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:23.858 [2024-11-06 09:04:37.138369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.858 [2024-11-06 09:04:37.138631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.859 [2024-11-06 09:04:37.138660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:23.859 [2024-11-06 09:04:37.143752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:23.859 [2024-11-06 09:04:37.144011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.859 [2024-11-06 09:04:37.144041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.148416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.148680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.148708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.153032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.153285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.153329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.157827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.158112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.158140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.162598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.162872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.162900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.167354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.167614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.167643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.172163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.172444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.172473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.176881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.177146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.177203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.181638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.181894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.181923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.186391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.186669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.186697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.191153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.191402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.191431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.117 [2024-11-06 09:04:37.195964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.117 [2024-11-06 09:04:37.196231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.117 [2024-11-06 09:04:37.196259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.201160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.201455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.201484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.206389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.206669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.206699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.211045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.211308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.211337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.215712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.216000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.216030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.220504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.220772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.220803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.225341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.225632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.225660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.230153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.230404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.234976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.235253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.235283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.239818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.240094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.240138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.244651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.244927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.244956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.249394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.249671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.249700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.254225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.254490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.254534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.258908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.259190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.259220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.263678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.263967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.263997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.268495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.268758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.268786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.273430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.273697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.273725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.278346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.278607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.278635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.283168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.283433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.283461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.288128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.288391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.288419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.293014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.293280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.293308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.297784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.298056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.298086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.302584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.302858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.302903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.307363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.307627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.307655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.312325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.312587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.312616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.317061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.317327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.317355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.321955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.322217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.322245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.326815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.118 [2024-11-06 09:04:37.327086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.118 [2024-11-06 09:04:37.327114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.118 [2024-11-06 09:04:37.331619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.331891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.331920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.336325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.336574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.336604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.340983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.341264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.341294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.345639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.345903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.345932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.350268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.350520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.350549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.355327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.355611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.355640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.360604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.360865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.360904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.365455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.365719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.365747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.370231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.370482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.370511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.374883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.375159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.375187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.379600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.379858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.379887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.384303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.384552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.384595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:24.119 [2024-11-06 09:04:37.389014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:24.119 [2024-11-06 09:04:37.389266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:24.119 [2024-11-06 09:04:37.389309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0
m:0 dnr:0 00:28:24.119 [2024-11-06 09:04:37.393728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.119 [2024-11-06 09:04:37.393988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.119 [2024-11-06 09:04:37.394017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.119 [2024-11-06 09:04:37.398628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.119 [2024-11-06 09:04:37.398885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.119 [2024-11-06 09:04:37.398915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.119 [2024-11-06 09:04:37.403315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.119 [2024-11-06 09:04:37.403565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.119 [2024-11-06 09:04:37.403594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.407989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.408243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.408271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.412665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.412939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.412969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.417503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.417762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.417806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.422273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.422540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.422569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.427115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.427391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.427439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.431829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.432117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.432146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.436686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.436956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.436985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.441443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.441706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.441735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.446296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.446548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.378 [2024-11-06 09:04:37.446577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.451130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.451433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.451463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.455922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.456188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.456216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.460894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.461172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.461201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.465645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.465935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.465965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.470523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.470790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.470819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.475252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.475532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.475560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.378 [2024-11-06 09:04:37.480000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.378 [2024-11-06 09:04:37.480261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.378 [2024-11-06 09:04:37.480289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.484779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.485033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.485062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.489543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.489802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.489830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.494303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.494565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.494593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.499105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.499368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.499396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.503877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 
00:28:24.379 [2024-11-06 09:04:37.504144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.504172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.508670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.508958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.508987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.513523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.513789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.513818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.518522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.518900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.518944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.524551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.524813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.524850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.529426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.529689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.529718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.534185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.534435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.534479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.538955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.539218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.539246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.543754] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.544048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.544077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.548594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.548852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.548881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.553972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.554240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.554275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.559845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.560097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.560126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.566854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.567119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.567148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.573251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.573546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.579135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.579384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.579414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.584540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.584794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.584823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.590695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.590957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.590987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.595648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.595910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.595941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.600385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.600635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.600665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.605089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.605340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.379 [2024-11-06 09:04:37.605369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.379 [2024-11-06 09:04:37.609849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.379 [2024-11-06 09:04:37.610110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.610139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.614619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.614876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.614908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.620658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.620950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.620980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.626114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.626364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.380 [2024-11-06 09:04:37.626393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.632570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.632865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.632896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.638355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.638610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.638640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.643296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.643547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.643576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.647986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.648237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.648272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.652719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.652978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.653007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.657414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.657662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.657691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.662102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.380 [2024-11-06 09:04:37.662354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.380 [2024-11-06 09:04:37.662384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.380 [2024-11-06 09:04:37.666764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.667021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.667052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.671428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.671679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.671708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.676211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.676460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.676490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.681158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.681408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.681438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.685900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 
00:28:24.639 [2024-11-06 09:04:37.686153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.686182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.690627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.690892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.690922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.695354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.695603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.695632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.700051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.700302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.700331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.704991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.705246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.705275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.710446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.710696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.710726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.715544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.715795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.715825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.720169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.720449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.720478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.724856] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.725108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.725137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.729572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.729821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.729857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.734253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.734504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.734533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.738877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.739128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.739156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:28:24.639 [2024-11-06 09:04:37.743485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.639 [2024-11-06 09:04:37.743734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.639 [2024-11-06 09:04:37.743763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.748100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.748349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.748378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.752712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.752971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.753000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.757351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.757602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.757630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.761998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.762249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.762278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.766664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.766923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.766952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.771825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.772087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.772122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.777033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.777286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.777315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.782227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.782477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.782506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.787375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.787627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.787656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.792546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.792795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.792824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.797829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.798089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.640 [2024-11-06 09:04:37.798118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.803000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.803252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.803281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.808176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.808427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.808456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.813454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.813704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.813733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.818556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.818815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.818852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.823270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.823522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.823551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.828435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.828688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.828717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.833894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.835043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.835073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 6011.00 IOPS, 751.38 MiB/s [2024-11-06T08:04:37.929Z] [2024-11-06 09:04:37.840818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 
[2024-11-06 09:04:37.841141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.841171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.845807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.846098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.846126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.850713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.851001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.851031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.855604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.855894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.855924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.860462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.860744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.860773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.865386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.865667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.865695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.870630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.870919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.870948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.876532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.876800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.876830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.640 [2024-11-06 09:04:37.883248] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.640 [2024-11-06 09:04:37.883559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.640 [2024-11-06 09:04:37.883589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.890016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.890300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.890330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.897805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.898094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.898124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.904250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.904519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.904549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.910633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.910998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.911028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.917018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.917181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.917215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.641 [2024-11-06 09:04:37.923615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.641 [2024-11-06 09:04:37.923921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.641 [2024-11-06 09:04:37.923950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.930194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.930510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.899 [2024-11-06 09:04:37.930539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.936701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.937019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.899 [2024-11-06 09:04:37.937049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.943215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.943528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.899 [2024-11-06 09:04:37.943558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.949627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.949966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.899 [2024-11-06 09:04:37.949995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.956136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.956352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.899 [2024-11-06 09:04:37.956381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.899 [2024-11-06 09:04:37.962320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.899 [2024-11-06 09:04:37.962580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.962610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.968550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.968818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.968857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.974585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.974860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.974889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.981018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.981325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.900 [2024-11-06 09:04:37.981354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.987540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.987804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.987856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.993552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.993819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.993872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:37.998356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:37.998620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:37.998649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.003124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.003386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.007884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.008189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.008218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.012727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.012996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.013025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.017597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.017869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.017897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.022355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.022615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.022643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.027124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.027388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.027415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.031860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.032147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.032175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.036680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.036950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.036978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.041396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 
00:28:24.900 [2024-11-06 09:04:38.041672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.041700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.046234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.046485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.046514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.051030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.051298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.051326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.055824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.056098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.056126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.060697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.060989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.061022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.065592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.065908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.070525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.070774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.070803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.076407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.076702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.076730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.082494] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.082790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.082818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.900 [2024-11-06 09:04:38.089655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.900 [2024-11-06 09:04:38.089917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.900 [2024-11-06 09:04:38.089947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.096006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.096271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.096299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.100811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.101071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.101100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:24.901 [2024-11-06 09:04:38.105485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.105786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.105814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.110291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.110558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.110586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.115218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.115481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.115509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.120065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.120345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.120374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.124790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.125082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.125111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.129621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.129892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.129919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.134510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.134770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.134799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.139151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.139410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.139438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.143809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.144079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.144108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.148630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.148899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.148927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.153378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.153626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.153655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.158148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.158410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:24.901 [2024-11-06 09:04:38.158453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.162905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.163169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.163197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.167627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.167894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.167922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.172366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.172624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.172652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.177177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.177429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.177457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.181780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.182048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.182091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:24.901 [2024-11-06 09:04:38.186482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:24.901 [2024-11-06 09:04:38.186730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.901 [2024-11-06 09:04:38.186758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.191126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.191387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.191420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.195815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.196078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.200518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.200795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.200842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.205319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.205606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.205635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.210034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.210310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.210339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.214785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 
00:28:25.160 [2024-11-06 09:04:38.215073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.215102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.219665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.219938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.219967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.224383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.224645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.224674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.229099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.229370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.229398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.233821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.234088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.234116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.238589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.238864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.238892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.243347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.243615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.243643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.249411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.249727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.249757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.254656] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.254914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.254944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.259401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.259650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.259694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.264159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.264424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.264466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.269045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.269296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.269339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.273844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.274108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.274136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.278778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.279033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.279063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.283745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.284002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.284031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.288778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.289031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.289061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.293555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.160 [2024-11-06 09:04:38.293866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.160 [2024-11-06 09:04:38.298335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.160 [2024-11-06 09:04:38.298596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.298640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.302957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.303222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.303250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.307628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.307896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.307939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.312375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.312639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.312667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.317090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.317361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.317394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.321903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.322153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.161 [2024-11-06 09:04:38.322182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.161 [2024-11-06 09:04:38.326646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.161 [2024-11-06 09:04:38.326917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:25.161 [2024-11-06 09:04:38.326945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.331406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.331669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.331697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.336164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.336437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.336466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.340938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.341192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.341220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.346137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.346399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.346427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.351094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.351384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.351413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.357173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.357428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.357457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.362946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.363212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.363240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.367813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.368072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.372624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.372910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.372940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.377400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.377663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.377692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.382026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.382286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.382314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.386879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.387163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.387206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.391690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.391961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.391990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.396582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.396840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.396884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.401399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.401680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.401713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.406130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.406393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.406421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.411459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.411800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.411853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.417488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.417754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.417783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.422377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.422641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.422669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.427181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.427443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.161 [2024-11-06 09:04:38.427471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.161 [2024-11-06 09:04:38.431883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.161 [2024-11-06 09:04:38.432149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.162 [2024-11-06 09:04:38.432177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.162 [2024-11-06 09:04:38.436624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.162 [2024-11-06 09:04:38.436923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.162 [2024-11-06 09:04:38.436953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.162 [2024-11-06 09:04:38.441450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.162 [2024-11-06 09:04:38.441711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.162 [2024-11-06 09:04:38.441739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.162 [2024-11-06 09:04:38.446957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.162 [2024-11-06 09:04:38.447219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.162 [2024-11-06 09:04:38.447248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.453370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.453620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.459602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.459896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.459927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.465093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.465355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.465384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.470294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.470557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.470587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.475595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.475854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.475888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.480464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.480729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.480757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.486075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.486342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.486372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.491343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.491604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.491632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.496173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.496434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.496463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.500959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.501227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.501256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.505667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.505924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.505953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.510363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.510612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.510640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.515096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.515360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.515388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.519742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.520000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.520030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.524569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.524864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.529319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.529571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.529600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.534055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.534319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.534352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.538813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.539077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.539105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.543626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.543906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.543934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.548416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.548664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.548693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.553161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.553434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.553463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.421 [2024-11-06 09:04:38.558058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.421 [2024-11-06 09:04:38.558328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.421 [2024-11-06 09:04:38.558357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.562986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.563252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.563280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.567863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.568133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.568162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.572612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.572872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.572901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.577512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.577782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.577810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.582233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.582508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.582541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.587022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.587287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.587315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.591872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.592158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.592187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.596693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.596953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.596983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.601387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.601668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.605995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.606260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.606288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.610794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.611083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.611113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.615570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.615822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.615858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.620377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.620639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.620667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.625030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.625293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.625321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.629778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.630035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.630064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.634506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.634769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.634797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.639263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.639526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.639554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.643965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.644232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.644275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.649028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.649318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.649347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.654092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.654356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.654385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.659039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.659304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.659341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.663813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.664077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.664119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.669023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.669272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.669301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.675016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.675332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.675363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.681569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.681876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.681906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.688431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.688681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.688711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.694144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.422 [2024-11-06 09:04:38.694395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.422 [2024-11-06 09:04:38.694424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.422 [2024-11-06 09:04:38.698853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.423 [2024-11-06 09:04:38.699105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.423 [2024-11-06 09:04:38.699134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.423 [2024-11-06 09:04:38.703523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.423 [2024-11-06 09:04:38.703800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.423 [2024-11-06 09:04:38.703852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.423 [2024-11-06 09:04:38.708229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.423 [2024-11-06 09:04:38.708499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.423 [2024-11-06 09:04:38.708543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.712950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.713230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.681 [2024-11-06 09:04:38.713258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.717582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.717838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.681 [2024-11-06 09:04:38.717867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.722371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.722633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.681 [2024-11-06 09:04:38.722661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.727152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.727404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.681 [2024-11-06 09:04:38.727447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.731867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.732120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.681 [2024-11-06 09:04:38.732150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:25.681 [2024-11-06 09:04:38.736620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90
00:28:25.681 [2024-11-06 09:04:38.736909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.736938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.742063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.742313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.742342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.747719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.747980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.748010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.752545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.752797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.752825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.757337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.757588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.757617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.762179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.762457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.762485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.767084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.767366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.767396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.771808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.772065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.772094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.776541] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.776818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.776857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.781510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.781760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.781789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.786949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.787217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.787245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.792320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.792599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.792634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.797742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.798003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.798032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.803304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.681 [2024-11-06 09:04:38.803555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.681 [2024-11-06 09:04:38.803584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.681 [2024-11-06 09:04:38.809341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.809591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.682 [2024-11-06 09:04:38.809620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.682 [2024-11-06 09:04:38.814864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.815116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.682 [2024-11-06 09:04:38.815146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.682 [2024-11-06 09:04:38.820576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.820826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.682 [2024-11-06 09:04:38.820863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:25.682 [2024-11-06 09:04:38.825708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.825966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.682 [2024-11-06 09:04:38.825995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:25.682 [2024-11-06 09:04:38.831192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.831445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.682 [2024-11-06 09:04:38.831474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.682 6026.50 IOPS, 753.31 MiB/s [2024-11-06T08:04:38.971Z] [2024-11-06 09:04:38.837623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb33090) with pdu=0x200016efef90 00:28:25.682 [2024-11-06 09:04:38.837814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:25.682 [2024-11-06 09:04:38.837851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:25.682 00:28:25.682 Latency(us) 00:28:25.682 [2024-11-06T08:04:38.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.682 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:25.682 nvme0n1 : 2.00 6023.80 752.97 0.00 0.00 2648.57 2184.53 7573.05 00:28:25.682 [2024-11-06T08:04:38.971Z] =================================================================================================================== 00:28:25.682 [2024-11-06T08:04:38.971Z] Total : 6023.80 752.97 0.00 0.00 2648.57 2184.53 7573.05 00:28:25.682 { 00:28:25.682 "results": [ 00:28:25.682 { 00:28:25.682 "job": "nvme0n1", 00:28:25.682 "core_mask": "0x2", 00:28:25.682 "workload": "randwrite", 00:28:25.682 "status": "finished", 00:28:25.682 "queue_depth": 16, 00:28:25.682 "io_size": 131072, 00:28:25.682 "runtime": 2.003554, 00:28:25.682 "iops": 6023.795715014419, 00:28:25.682 "mibps": 752.9744643768024, 00:28:25.682 "io_failed": 0, 00:28:25.682 "io_timeout": 0, 00:28:25.682 "avg_latency_us": 2648.5709422671493, 00:28:25.682 "min_latency_us": 2184.5333333333333, 00:28:25.682 "max_latency_us": 7573.0488888888885 00:28:25.682 } 00:28:25.682 ], 00:28:25.682 "core_count": 1 00:28:25.682 } 00:28:25.682 09:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:25.682 09:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:25.682 09:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:25.682 | .driver_specific 00:28:25.682 | .nvme_error 00:28:25.682 | .status_code 00:28:25.682 | .command_transient_transport_error' 00:28:25.682 09:04:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
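For reference, the per-job statistics block printed above is plain JSON once the log-line timestamp prefixes are stripped. A minimal sketch of pulling the headline numbers out of it (field names and values copied from the bdevperf summary above; this is an illustration, not part of the test scripts) might look like:

```python
import json

# The bdevperf "results" document from the log above, with the
# 00:28:25.682 log prefixes removed so it parses as plain JSON.
summary = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.003554,
      "iops": 6023.795715014419,
      "mibps": 752.9744643768024,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 2648.5709422671493,
      "min_latency_us": 2184.5333333333333,
      "max_latency_us": 7573.0488888888885
    }
  ],
  "core_count": 1
}
""")

# Print one summary line per job, mirroring the human-readable table.
for job in summary["results"]:
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
          f'{job["mibps"]:.2f} MiB/s, avg {job["avg_latency_us"]:.1f} us')
```

In the real run this document is emitted by bdevperf itself; the sketch only shows that the table and the JSON block carry the same numbers.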
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 389 > 0 )) 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 932483 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 932483 ']' 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 932483 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 932483 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 932483' 00:28:25.940 killing process with pid 932483 00:28:25.940 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 932483 00:28:25.941 Received shutdown signal, test time was about 2.000000 seconds 00:28:25.941 00:28:25.941 Latency(us) 00:28:25.941 [2024-11-06T08:04:39.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.941 [2024-11-06T08:04:39.230Z] =================================================================================================================== 00:28:25.941 
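The `get_transient_errcount` helper above pipes `bdev_get_iostat` output through a `jq` filter and then checks `(( 389 > 0 ))`. The same extraction in Python, using a stub iostat document (the error count 389 is the value observed above; in the test the document comes from `scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`, and the surrounding structure here is a hypothetical minimal shape for illustration), could look like:

```python
import json

# Stub standing in for the bdev_get_iostat RPC response; only the
# fields walked by the jq filter are reproduced here.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 389
          }
        }
      }
    }
  ]
}
""")

# Same path the jq filter walks:
#   .bdevs[0] | .driver_specific | .nvme_error
#             | .status_code | .command_transient_transport_error
errcount = (iostat["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])

# The test passes only if at least one transient transport error
# was counted, matching the (( errcount > 0 )) check in the log.
assert errcount > 0
print(errcount)
```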
[2024-11-06T08:04:39.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:25.941 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 932483 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 931111 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 931111 ']' 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 931111 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 931111 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 931111' 00:28:26.198 killing process with pid 931111 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 931111 00:28:26.198 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 931111 00:28:26.456 00:28:26.456 real 0m15.145s 00:28:26.456 user 0m30.162s 00:28:26.456 sys 0m4.235s 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.456 
************************************ 00:28:26.456 END TEST nvmf_digest_error 00:28:26.456 ************************************ 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:26.456 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:26.456 rmmod nvme_tcp 00:28:26.456 rmmod nvme_fabrics 00:28:26.456 rmmod nvme_keyring 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 931111 ']' 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 931111 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 931111 ']' 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 931111 00:28:26.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (931111) - No such process 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 931111 is not found' 00:28:26.457 Process with pid 931111 is not 
found 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:26.457 09:04:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.991 00:28:28.991 real 0m35.800s 00:28:28.991 user 1m2.974s 00:28:28.991 sys 0m10.419s 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:28.991 ************************************ 00:28:28.991 END TEST nvmf_digest 00:28:28.991 ************************************ 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:28.991 09:04:41 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.991 ************************************ 00:28:28.991 START TEST nvmf_bdevperf 00:28:28.991 ************************************ 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:28.991 * Looking for test storage... 00:28:28.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lcov --version 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:28.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.991 --rc genhtml_branch_coverage=1 00:28:28.991 --rc genhtml_function_coverage=1 00:28:28.991 --rc genhtml_legend=1 00:28:28.991 --rc geninfo_all_blocks=1 00:28:28.991 --rc geninfo_unexecuted_blocks=1 00:28:28.991 00:28:28.991 ' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:28.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.991 --rc genhtml_branch_coverage=1 00:28:28.991 --rc genhtml_function_coverage=1 00:28:28.991 --rc genhtml_legend=1 00:28:28.991 --rc geninfo_all_blocks=1 00:28:28.991 --rc geninfo_unexecuted_blocks=1 00:28:28.991 00:28:28.991 ' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:28.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.991 --rc genhtml_branch_coverage=1 00:28:28.991 --rc genhtml_function_coverage=1 00:28:28.991 --rc genhtml_legend=1 00:28:28.991 --rc geninfo_all_blocks=1 00:28:28.991 --rc geninfo_unexecuted_blocks=1 00:28:28.991 00:28:28.991 ' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:28.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.991 --rc genhtml_branch_coverage=1 00:28:28.991 --rc genhtml_function_coverage=1 00:28:28.991 --rc genhtml_legend=1 00:28:28.991 --rc geninfo_all_blocks=1 
00:28:28.991 --rc geninfo_unexecuted_blocks=1 00:28:28.991 00:28:28.991 ' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:28.991 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.992 09:04:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:30.893 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:30.893 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:30.893 Found net devices under 0000:09:00.0: cvl_0_0 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:30.893 Found net devices under 0000:09:00.1: cvl_0_1 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:30.893 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.894 09:04:44 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:28:30.894 00:28:30.894 --- 10.0.0.2 ping statistics --- 00:28:30.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.894 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:28:30.894 00:28:30.894 --- 10.0.0.1 ping statistics --- 00:28:30.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.894 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.894 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:30.894 09:04:44 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=934881 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 934881 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 934881 ']' 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.152 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.152 [2024-11-06 09:04:44.245988] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:28:31.152 [2024-11-06 09:04:44.246085] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.152 [2024-11-06 09:04:44.321760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:31.152 [2024-11-06 09:04:44.381923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.152 [2024-11-06 09:04:44.381984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.152 [2024-11-06 09:04:44.381998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.152 [2024-11-06 09:04:44.382009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.152 [2024-11-06 09:04:44.382018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:31.152 [2024-11-06 09:04:44.383544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.152 [2024-11-06 09:04:44.383623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.152 [2024-11-06 09:04:44.383627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.410 [2024-11-06 09:04:44.537252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.410 Malloc0 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.410 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:31.411 [2024-11-06 09:04:44.603325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:31.411 
09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:31.411 { 00:28:31.411 "params": { 00:28:31.411 "name": "Nvme$subsystem", 00:28:31.411 "trtype": "$TEST_TRANSPORT", 00:28:31.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.411 "adrfam": "ipv4", 00:28:31.411 "trsvcid": "$NVMF_PORT", 00:28:31.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.411 "hdgst": ${hdgst:-false}, 00:28:31.411 "ddgst": ${ddgst:-false} 00:28:31.411 }, 00:28:31.411 "method": "bdev_nvme_attach_controller" 00:28:31.411 } 00:28:31.411 EOF 00:28:31.411 )") 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:31.411 09:04:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:31.411 "params": { 00:28:31.411 "name": "Nvme1", 00:28:31.411 "trtype": "tcp", 00:28:31.411 "traddr": "10.0.0.2", 00:28:31.411 "adrfam": "ipv4", 00:28:31.411 "trsvcid": "4420", 00:28:31.411 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.411 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.411 "hdgst": false, 00:28:31.411 "ddgst": false 00:28:31.411 }, 00:28:31.411 "method": "bdev_nvme_attach_controller" 00:28:31.411 }' 00:28:31.411 [2024-11-06 09:04:44.655181] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:28:31.411 [2024-11-06 09:04:44.655261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid934990 ] 00:28:31.669 [2024-11-06 09:04:44.722907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.669 [2024-11-06 09:04:44.781846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.927 Running I/O for 1 seconds... 00:28:32.860 8479.00 IOPS, 33.12 MiB/s 00:28:32.860 Latency(us) 00:28:32.860 [2024-11-06T08:04:46.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.860 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:32.860 Verification LBA range: start 0x0 length 0x4000 00:28:32.860 Nvme1n1 : 1.01 8530.63 33.32 0.00 0.00 14919.34 3301.07 14854.83 00:28:32.860 [2024-11-06T08:04:46.149Z] =================================================================================================================== 00:28:32.860 [2024-11-06T08:04:46.149Z] Total : 8530.63 33.32 0.00 0.00 14919.34 3301.07 14854.83 00:28:33.118 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=935133 00:28:33.118 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:33.118 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:33.118 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for 
subsystem in "${@:-1}" 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:28:33.119 { 00:28:33.119 "params": { 00:28:33.119 "name": "Nvme$subsystem", 00:28:33.119 "trtype": "$TEST_TRANSPORT", 00:28:33.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.119 "adrfam": "ipv4", 00:28:33.119 "trsvcid": "$NVMF_PORT", 00:28:33.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.119 "hdgst": ${hdgst:-false}, 00:28:33.119 "ddgst": ${ddgst:-false} 00:28:33.119 }, 00:28:33.119 "method": "bdev_nvme_attach_controller" 00:28:33.119 } 00:28:33.119 EOF 00:28:33.119 )") 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:28:33.119 09:04:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:28:33.119 "params": { 00:28:33.119 "name": "Nvme1", 00:28:33.119 "trtype": "tcp", 00:28:33.119 "traddr": "10.0.0.2", 00:28:33.119 "adrfam": "ipv4", 00:28:33.119 "trsvcid": "4420", 00:28:33.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.119 "hdgst": false, 00:28:33.119 "ddgst": false 00:28:33.119 }, 00:28:33.119 "method": "bdev_nvme_attach_controller" 00:28:33.119 }' 00:28:33.119 [2024-11-06 09:04:46.258053] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:28:33.119 [2024-11-06 09:04:46.258140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935133 ] 00:28:33.119 [2024-11-06 09:04:46.325048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.119 [2024-11-06 09:04:46.381127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.377 Running I/O for 15 seconds... 00:28:35.683 8438.00 IOPS, 32.96 MiB/s [2024-11-06T08:04:49.232Z] 8477.00 IOPS, 33.11 MiB/s [2024-11-06T08:04:49.232Z] 09:04:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 934881 00:28:35.943 09:04:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:35.943 [2024-11-06 09:04:49.223066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.943 [2024-11-06 09:04:49.223127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:28:35.943 [2024-11-06 09:04:49.223486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.943 [2024-11-06 09:04:49.223860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.943 [2024-11-06 09:04:49.223875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.223891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.223906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.223923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.223938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.223954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.223969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.223985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 
09:04:49.224046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 
[2024-11-06 09:04:49.224900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.224979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.224994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.225010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.225025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.225041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:48608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.225056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.944 [2024-11-06 09:04:49.225072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.944 [2024-11-06 09:04:49.225087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:48624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:48688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:48704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48752 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:48768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 
09:04:49.225735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225918] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.225980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.225997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48888 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.945 [2024-11-06 09:04:49.226328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.945 [2024-11-06 09:04:49.226342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:48952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:48976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 
[2024-11-06 09:04:49.226697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:35.946 [2024-11-06 09:04:49.226726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:48032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.226983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:48072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.226999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:48112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.946 [2024-11-06 09:04:49.227227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75a5c0 is same with the state(6) to be set 00:28:35.946 [2024-11-06 09:04:49.227256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:35.946 [2024-11-06 09:04:49.227267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:35.946 [2024-11-06 09:04:49.227278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:48136 len:8 PRP1 0x0 PRP2 0x0 00:28:35.946 [2024-11-06 09:04:49.227290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.946 [2024-11-06 09:04:49.227460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.946 [2024-11-06 09:04:49.227489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.946 [2024-11-06 09:04:49.227533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.946 [2024-11-06 09:04:49.227561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.946 [2024-11-06 09:04:49.227575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.205 [2024-11-06 09:04:49.230941] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.205 [2024-11-06 09:04:49.230978] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.205 [2024-11-06 09:04:49.231661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.205 [2024-11-06 09:04:49.231718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.205 [2024-11-06 09:04:49.231736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.205 [2024-11-06 09:04:49.231964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.205 [2024-11-06 09:04:49.232220] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.205 [2024-11-06 09:04:49.232239] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.205 [2024-11-06 09:04:49.232253] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.205 [2024-11-06 09:04:49.235283] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
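The `connect() failed, errno = 111` lines above are the transport-level cause of every subsequent reset failure: on Linux, errno 111 is `ECONNREFUSED`, raised when nothing is listening on the target address (here the NVMe-oF target at 10.0.0.2:4420 has gone away). A minimal sketch reproducing the same errno locally, assuming a Linux host; `connect_errno` is a hypothetical helper, not part of SPDK:

```python
import errno
import socket

def connect_errno(host, port, timeout=1.0):
    """Attempt a TCP connect and return 0 on success or the OS errno on failure.
    Hypothetical helper illustrating the errno=111 seen in the log above."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Find a port with no listener by binding an ephemeral port, noting it,
# and closing it before connecting (small race, fine for illustration).
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
unused_port = probe.getsockname()[1]
probe.close()

print(connect_errno("127.0.0.1", unused_port))  # ECONNREFUSED, i.e. 111 on Linux
```

The same errno name/number mapping is what `posix_sock_create` reports in the log before `nvme_tcp_qpair_connect_sock` gives up on the qpair.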
00:28:36.205 [2024-11-06 09:04:49.244461] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.244814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.244850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.244868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.245105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.245311] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.245331] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.245344] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.248260] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.257637] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.258053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.258082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.258098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.258337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.258542] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.258562] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.258574] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.261433] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.270638] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.270993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.271021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.271037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.271272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.271478] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.271498] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.271511] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.274429] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.283757] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.284181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.284210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.284226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.284464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.284654] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.284674] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.284687] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.287617] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.296953] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.297333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.297360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.297381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.297598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.297804] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.297851] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.297867] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.300778] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.310090] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.310452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.310479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.310495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.310731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.310985] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.311007] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.311021] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.313925] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.323140] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.323554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.323581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.323597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.323829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.324059] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.324081] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.324095] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.326995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.336254] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.336610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.336652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.336667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.336895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.337112] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.337147] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.337159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.339948] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.349248] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.349650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.349677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.206 [2024-11-06 09:04:49.349693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.206 [2024-11-06 09:04:49.349925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.206 [2024-11-06 09:04:49.350127] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.206 [2024-11-06 09:04:49.350162] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.206 [2024-11-06 09:04:49.350176] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.206 [2024-11-06 09:04:49.353061] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.206 [2024-11-06 09:04:49.362322] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.206 [2024-11-06 09:04:49.362731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.206 [2024-11-06 09:04:49.362758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.362774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.363022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.363248] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.363268] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.363281] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.366174] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.375512] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.375858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.375886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.375902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.376140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.376346] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.376366] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.376384] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.379195] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.388644] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.389016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.389046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.389062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.389313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.389517] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.389537] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.389550] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.392400] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.401731] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.402101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.402129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.402160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.402391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.402581] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.402601] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.402613] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.405612] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.414997] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.415391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.415419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.415434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.415651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.415901] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.415924] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.415938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.418750] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.428194] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.428516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.428543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.428559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.428777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.429021] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.429044] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.429058] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.431960] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.441223] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:36.207 [2024-11-06 09:04:49.441629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:36.207 [2024-11-06 09:04:49.441657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:36.207 [2024-11-06 09:04:49.441673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:36.207 [2024-11-06 09:04:49.441922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:36.207 [2024-11-06 09:04:49.442139] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:36.207 [2024-11-06 09:04:49.442159] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:36.207 [2024-11-06 09:04:49.442173] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:36.207 [2024-11-06 09:04:49.445056] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:36.207 [2024-11-06 09:04:49.454238] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.207 [2024-11-06 09:04:49.454581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.207 [2024-11-06 09:04:49.454609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.207 [2024-11-06 09:04:49.454625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.207 [2024-11-06 09:04:49.454870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.207 [2024-11-06 09:04:49.455066] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.207 [2024-11-06 09:04:49.455086] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.207 [2024-11-06 09:04:49.455100] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.207 [2024-11-06 09:04:49.457887] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.207 [2024-11-06 09:04:49.467237] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.207 [2024-11-06 09:04:49.467584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.207 [2024-11-06 09:04:49.467612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.207 [2024-11-06 09:04:49.467633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.207 [2024-11-06 09:04:49.467881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.207 [2024-11-06 09:04:49.468097] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.207 [2024-11-06 09:04:49.468118] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.207 [2024-11-06 09:04:49.468132] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.207 [2024-11-06 09:04:49.471034] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.208 [2024-11-06 09:04:49.480297] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.208 [2024-11-06 09:04:49.480638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.208 [2024-11-06 09:04:49.480666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.208 [2024-11-06 09:04:49.480683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.208 [2024-11-06 09:04:49.480932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.208 [2024-11-06 09:04:49.481170] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.208 [2024-11-06 09:04:49.481192] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.208 [2024-11-06 09:04:49.481206] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.208 [2024-11-06 09:04:49.484419] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.471 [2024-11-06 09:04:49.494261] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.471 [2024-11-06 09:04:49.494628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.471 [2024-11-06 09:04:49.494658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.471 [2024-11-06 09:04:49.494674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.471 [2024-11-06 09:04:49.494924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.471 [2024-11-06 09:04:49.495162] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.471 [2024-11-06 09:04:49.495183] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.471 [2024-11-06 09:04:49.495198] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.471 [2024-11-06 09:04:49.498373] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.471 [2024-11-06 09:04:49.507461] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.472 [2024-11-06 09:04:49.507868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.472 [2024-11-06 09:04:49.507897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.472 [2024-11-06 09:04:49.507914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.472 [2024-11-06 09:04:49.508150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.472 [2024-11-06 09:04:49.508360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.472 [2024-11-06 09:04:49.508380] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.472 [2024-11-06 09:04:49.508393] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.472 [2024-11-06 09:04:49.511321] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.472 [2024-11-06 09:04:49.520780] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.472 [2024-11-06 09:04:49.521160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.472 [2024-11-06 09:04:49.521188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.472 [2024-11-06 09:04:49.521204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.472 [2024-11-06 09:04:49.521421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.472 [2024-11-06 09:04:49.521626] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.472 [2024-11-06 09:04:49.521645] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.472 [2024-11-06 09:04:49.521658] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.472 [2024-11-06 09:04:49.524619] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.472 [2024-11-06 09:04:49.534168] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.472 [2024-11-06 09:04:49.534493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.472 [2024-11-06 09:04:49.534521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.472 [2024-11-06 09:04:49.534537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.472 [2024-11-06 09:04:49.534755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.472 [2024-11-06 09:04:49.535002] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.473 [2024-11-06 09:04:49.535025] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.473 [2024-11-06 09:04:49.535040] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.473 [2024-11-06 09:04:49.538038] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.473 [2024-11-06 09:04:49.547394] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.473 [2024-11-06 09:04:49.547798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.473 [2024-11-06 09:04:49.547847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.473 [2024-11-06 09:04:49.547866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.473 [2024-11-06 09:04:49.548119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.473 [2024-11-06 09:04:49.548341] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.473 [2024-11-06 09:04:49.548361] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.473 [2024-11-06 09:04:49.548378] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.473 [2024-11-06 09:04:49.551317] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.473 [2024-11-06 09:04:49.560404] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.473 [2024-11-06 09:04:49.560808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.473 [2024-11-06 09:04:49.560841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.473 [2024-11-06 09:04:49.560876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.473 [2024-11-06 09:04:49.561114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.473 [2024-11-06 09:04:49.561319] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.473 [2024-11-06 09:04:49.561339] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.473 [2024-11-06 09:04:49.561352] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.473 [2024-11-06 09:04:49.564281] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.474 [2024-11-06 09:04:49.573379] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.474 [2024-11-06 09:04:49.573724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.474 [2024-11-06 09:04:49.573752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.474 [2024-11-06 09:04:49.573768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.474 [2024-11-06 09:04:49.574037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.474 [2024-11-06 09:04:49.574265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.474 [2024-11-06 09:04:49.574285] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.474 [2024-11-06 09:04:49.574298] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.474 [2024-11-06 09:04:49.577185] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.474 7462.00 IOPS, 29.15 MiB/s [2024-11-06T08:04:49.763Z] [2024-11-06 09:04:49.586479] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.474 [2024-11-06 09:04:49.586888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.474 [2024-11-06 09:04:49.586917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.474 [2024-11-06 09:04:49.586933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.474 [2024-11-06 09:04:49.587169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.474 [2024-11-06 09:04:49.587376] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.474 [2024-11-06 09:04:49.587395] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.474 [2024-11-06 09:04:49.587408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.474 [2024-11-06 09:04:49.590321] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.474 [2024-11-06 09:04:49.599612] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.474 [2024-11-06 09:04:49.599990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.474 [2024-11-06 09:04:49.600018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.474 [2024-11-06 09:04:49.600033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.474 [2024-11-06 09:04:49.600250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.475 [2024-11-06 09:04:49.600457] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.475 [2024-11-06 09:04:49.600477] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.475 [2024-11-06 09:04:49.600490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.475 [2024-11-06 09:04:49.603301] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.475 [2024-11-06 09:04:49.612641] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.475 [2024-11-06 09:04:49.612987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.475 [2024-11-06 09:04:49.613015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.475 [2024-11-06 09:04:49.613030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.475 [2024-11-06 09:04:49.613266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.475 [2024-11-06 09:04:49.613457] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.475 [2024-11-06 09:04:49.613476] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.475 [2024-11-06 09:04:49.613489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.475 [2024-11-06 09:04:49.616304] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.475 [2024-11-06 09:04:49.625637] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.475 [2024-11-06 09:04:49.625950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.475 [2024-11-06 09:04:49.625978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.475 [2024-11-06 09:04:49.625994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.475 [2024-11-06 09:04:49.626212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.475 [2024-11-06 09:04:49.626419] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.475 [2024-11-06 09:04:49.626439] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.475 [2024-11-06 09:04:49.626452] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.475 [2024-11-06 09:04:49.629365] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.475 [2024-11-06 09:04:49.638619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.476 [2024-11-06 09:04:49.639034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.476 [2024-11-06 09:04:49.639062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.476 [2024-11-06 09:04:49.639084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.476 [2024-11-06 09:04:49.639320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.476 [2024-11-06 09:04:49.639525] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.476 [2024-11-06 09:04:49.639545] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.476 [2024-11-06 09:04:49.639558] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.476 [2024-11-06 09:04:49.642484] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.476 [2024-11-06 09:04:49.651729] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.476 [2024-11-06 09:04:49.652068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.476 [2024-11-06 09:04:49.652097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.476 [2024-11-06 09:04:49.652113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.476 [2024-11-06 09:04:49.652347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.476 [2024-11-06 09:04:49.652554] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.476 [2024-11-06 09:04:49.652573] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.476 [2024-11-06 09:04:49.652586] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.476 [2024-11-06 09:04:49.655484] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.476 [2024-11-06 09:04:49.664728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.476 [2024-11-06 09:04:49.665103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.476 [2024-11-06 09:04:49.665131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.476 [2024-11-06 09:04:49.665162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.476 [2024-11-06 09:04:49.665393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.476 [2024-11-06 09:04:49.665583] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.477 [2024-11-06 09:04:49.665603] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.477 [2024-11-06 09:04:49.665616] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.477 [2024-11-06 09:04:49.668545] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.477 [2024-11-06 09:04:49.677786] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.477 [2024-11-06 09:04:49.678157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.477 [2024-11-06 09:04:49.678201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.477 [2024-11-06 09:04:49.678217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.477 [2024-11-06 09:04:49.678451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.477 [2024-11-06 09:04:49.678646] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.477 [2024-11-06 09:04:49.678666] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.477 [2024-11-06 09:04:49.678679] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.477 [2024-11-06 09:04:49.681593] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.477 [2024-11-06 09:04:49.690841] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.477 [2024-11-06 09:04:49.691184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.482 [2024-11-06 09:04:49.691211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.482 [2024-11-06 09:04:49.691227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.691441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.691646] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.691666] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.691679] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.483 [2024-11-06 09:04:49.694645] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.483 [2024-11-06 09:04:49.703930] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.483 [2024-11-06 09:04:49.704308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.483 [2024-11-06 09:04:49.704335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.483 [2024-11-06 09:04:49.704350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.704571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.704777] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.704797] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.704810] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.483 [2024-11-06 09:04:49.707708] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.483 [2024-11-06 09:04:49.717038] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.483 [2024-11-06 09:04:49.717386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.483 [2024-11-06 09:04:49.717414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.483 [2024-11-06 09:04:49.717429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.717647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.717878] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.717899] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.717917] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.483 [2024-11-06 09:04:49.720705] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.483 [2024-11-06 09:04:49.730046] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.483 [2024-11-06 09:04:49.730389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.483 [2024-11-06 09:04:49.730417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.483 [2024-11-06 09:04:49.730432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.730668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.730899] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.730936] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.730951] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.483 [2024-11-06 09:04:49.734121] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.483 [2024-11-06 09:04:49.743237] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.483 [2024-11-06 09:04:49.743643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.483 [2024-11-06 09:04:49.743670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.483 [2024-11-06 09:04:49.743686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.743932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.744156] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.744176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.744190] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.483 [2024-11-06 09:04:49.747117] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.483 [2024-11-06 09:04:49.756614] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.483 [2024-11-06 09:04:49.756997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.483 [2024-11-06 09:04:49.757026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.483 [2024-11-06 09:04:49.757042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.483 [2024-11-06 09:04:49.757285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.483 [2024-11-06 09:04:49.757491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.483 [2024-11-06 09:04:49.757510] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.483 [2024-11-06 09:04:49.757524] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.760672] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.769702] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.770061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.770089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.770106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.770343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.770534] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.770554] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.770566] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.773480] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.783077] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.783499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.783527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.783543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.783779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.784015] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.784037] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.784051] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.786956] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.796241] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.796648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.796676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.796692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.796939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.797157] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.797177] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.797205] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.800121] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.809367] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.809712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.809740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.809761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.810026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.810238] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.810258] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.810272] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.813164] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.822586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.822976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.823006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.823037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.823280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.823526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.823547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.823561] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.826708] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.835808] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.836179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.836207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.836222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.836454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.836666] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.836686] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.836700] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.840015] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.849552] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.849940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.849970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.849986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.850222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.850449] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.850470] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.746 [2024-11-06 09:04:49.850483] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.746 [2024-11-06 09:04:49.853747] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.746 [2024-11-06 09:04:49.863299] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.746 [2024-11-06 09:04:49.863671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.746 [2024-11-06 09:04:49.863700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.746 [2024-11-06 09:04:49.863716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.746 [2024-11-06 09:04:49.863943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.746 [2024-11-06 09:04:49.864177] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.746 [2024-11-06 09:04:49.864198] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.864211] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.867405] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.876971] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.877446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.877494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.877511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.877755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.877999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.878023] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.878038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.881408] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.890654] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.890991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.891021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.891038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.891269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.891518] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.891540] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.891558] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.894893] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.904377] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.904802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.904840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.904858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.905075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.905329] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.905349] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.905364] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.908667] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.917631] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.917998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.918028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.918045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.918301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.918492] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.918511] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.918525] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.921567] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.930992] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.931436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.931464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.931480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.931717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.931957] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.931978] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.931993] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.934942] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.944153] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.944459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.944549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.944565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.944796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.945010] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.945031] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.945045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.947942] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.957395] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.957893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.957922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.957939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.958202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.958392] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.958411] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.958424] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.961381] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.970749] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.971171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.971201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.971218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.971456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.971649] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.971669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.971683] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.974683] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.984032] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.984423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.747 [2024-11-06 09:04:49.984513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.747 [2024-11-06 09:04:49.984534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.747 [2024-11-06 09:04:49.984765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.747 [2024-11-06 09:04:49.984983] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.747 [2024-11-06 09:04:49.985005] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.747 [2024-11-06 09:04:49.985019] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.747 [2024-11-06 09:04:49.988272] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.747 [2024-11-06 09:04:49.997310] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.747 [2024-11-06 09:04:49.997657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.748 [2024-11-06 09:04:49.997685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.748 [2024-11-06 09:04:49.997701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.748 [2024-11-06 09:04:49.997968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.748 [2024-11-06 09:04:49.998188] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.748 [2024-11-06 09:04:49.998207] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.748 [2024-11-06 09:04:49.998221] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.748 [2024-11-06 09:04:50.001245] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.748 [2024-11-06 09:04:50.011096] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.748 [2024-11-06 09:04:50.011455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.748 [2024-11-06 09:04:50.011487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.748 [2024-11-06 09:04:50.011504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.748 [2024-11-06 09:04:50.011749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.748 [2024-11-06 09:04:50.011999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.748 [2024-11-06 09:04:50.012023] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.748 [2024-11-06 09:04:50.012038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.748 [2024-11-06 09:04:50.015340] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:36.748 [2024-11-06 09:04:50.024601] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:36.748 [2024-11-06 09:04:50.025005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:36.748 [2024-11-06 09:04:50.025037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:36.748 [2024-11-06 09:04:50.025056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:36.748 [2024-11-06 09:04:50.025302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:36.748 [2024-11-06 09:04:50.025518] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:36.748 [2024-11-06 09:04:50.025540] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:36.748 [2024-11-06 09:04:50.025555] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:36.748 [2024-11-06 09:04:50.028731] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.007 [2024-11-06 09:04:50.038212] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.007 [2024-11-06 09:04:50.038581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.007 [2024-11-06 09:04:50.038611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.007 [2024-11-06 09:04:50.038627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.007 [2024-11-06 09:04:50.038881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.007 [2024-11-06 09:04:50.039118] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.007 [2024-11-06 09:04:50.039141] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.007 [2024-11-06 09:04:50.039156] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.007 [2024-11-06 09:04:50.042502] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.007 [2024-11-06 09:04:50.051678] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.007 [2024-11-06 09:04:50.052060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.007 [2024-11-06 09:04:50.052090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.007 [2024-11-06 09:04:50.052107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.007 [2024-11-06 09:04:50.052360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.007 [2024-11-06 09:04:50.052566] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.007 [2024-11-06 09:04:50.052586] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.007 [2024-11-06 09:04:50.052599] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.007 [2024-11-06 09:04:50.055739] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.007 [2024-11-06 09:04:50.065078] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.007 [2024-11-06 09:04:50.065523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.007 [2024-11-06 09:04:50.065577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.007 [2024-11-06 09:04:50.065593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.007 [2024-11-06 09:04:50.065884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.007 [2024-11-06 09:04:50.066129] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.007 [2024-11-06 09:04:50.066164] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.007 [2024-11-06 09:04:50.066178] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.007 [2024-11-06 09:04:50.069239] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.007 [2024-11-06 09:04:50.078392] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.007 [2024-11-06 09:04:50.078868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.007 [2024-11-06 09:04:50.078912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.007 [2024-11-06 09:04:50.078928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.007 [2024-11-06 09:04:50.079176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.007 [2024-11-06 09:04:50.079381] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.007 [2024-11-06 09:04:50.079402] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.007 [2024-11-06 09:04:50.079415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.007 [2024-11-06 09:04:50.082324] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.007 [2024-11-06 09:04:50.091918] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.007 [2024-11-06 09:04:50.092305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.007 [2024-11-06 09:04:50.092334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.007 [2024-11-06 09:04:50.092350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.007 [2024-11-06 09:04:50.092586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.007 [2024-11-06 09:04:50.092791] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.007 [2024-11-06 09:04:50.092826] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.007 [2024-11-06 09:04:50.092855] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.007 [2024-11-06 09:04:50.095943] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.007 [2024-11-06 09:04:50.105317] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.007 [2024-11-06 09:04:50.105647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.007 [2024-11-06 09:04:50.105676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.007 [2024-11-06 09:04:50.105692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.007 [2024-11-06 09:04:50.105946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.007 [2024-11-06 09:04:50.106196] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.007 [2024-11-06 09:04:50.106218] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.007 [2024-11-06 09:04:50.106232] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.007 [2024-11-06 09:04:50.109416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.007 [2024-11-06 09:04:50.118759] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.119121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.119165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.119183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.119428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.119653] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.119673] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.119687] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.122967] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.132069] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.132533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.132561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.132577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.132814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.133043] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.133066] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.133080] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.136027] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.145488] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.145844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.145890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.145907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.146150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.146368] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.146390] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.146404] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.149624] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.158748] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.159173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.159202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.159219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.159464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.159673] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.159694] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.159708] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.162967] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.172148] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.172606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.172661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.172678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.172921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.173143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.173179] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.173194] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.176327] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.185648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.186037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.186090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.186106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.186335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.186526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.186547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.186561] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.189674] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.199013] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.199437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.199466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.199482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.199720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.199973] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.200017] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.200034] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.203126] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.212449] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.212816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.212902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.212921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.213153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.213359] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.213380] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.213395] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.216576] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.225948] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.226351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.226379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.226396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.226633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.226899] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.226922] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.008 [2024-11-06 09:04:50.226937] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.008 [2024-11-06 09:04:50.229998] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.008 [2024-11-06 09:04:50.239414] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.008 [2024-11-06 09:04:50.239870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.008 [2024-11-06 09:04:50.239915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.008 [2024-11-06 09:04:50.239933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.008 [2024-11-06 09:04:50.240164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.008 [2024-11-06 09:04:50.240371] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.008 [2024-11-06 09:04:50.240393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.009 [2024-11-06 09:04:50.240406] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.009 [2024-11-06 09:04:50.243544] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.009 [2024-11-06 09:04:50.253159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.009 [2024-11-06 09:04:50.253550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.009 [2024-11-06 09:04:50.253595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.009 [2024-11-06 09:04:50.253612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.009 [2024-11-06 09:04:50.253881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.009 [2024-11-06 09:04:50.254111] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.009 [2024-11-06 09:04:50.254135] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.009 [2024-11-06 09:04:50.254151] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.009 [2024-11-06 09:04:50.257681] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.009 [2024-11-06 09:04:50.266696] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.009 [2024-11-06 09:04:50.267080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.009 [2024-11-06 09:04:50.267110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.009 [2024-11-06 09:04:50.267142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.009 [2024-11-06 09:04:50.267372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.009 [2024-11-06 09:04:50.267593] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.009 [2024-11-06 09:04:50.267615] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.009 [2024-11-06 09:04:50.267630] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.009 [2024-11-06 09:04:50.270814] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.009 [2024-11-06 09:04:50.280150] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.009 [2024-11-06 09:04:50.280507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.009 [2024-11-06 09:04:50.280598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.009 [2024-11-06 09:04:50.280615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.009 [2024-11-06 09:04:50.280872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.009 [2024-11-06 09:04:50.281084] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.009 [2024-11-06 09:04:50.281105] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.009 [2024-11-06 09:04:50.281119] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.009 [2024-11-06 09:04:50.284262] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.009 [2024-11-06 09:04:50.293859] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.009 [2024-11-06 09:04:50.294335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.009 [2024-11-06 09:04:50.294380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.009 [2024-11-06 09:04:50.294396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.009 [2024-11-06 09:04:50.294633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.009 [2024-11-06 09:04:50.294884] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.009 [2024-11-06 09:04:50.294909] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.009 [2024-11-06 09:04:50.294939] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.298222] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.307341] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.267 [2024-11-06 09:04:50.307815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.267 [2024-11-06 09:04:50.307852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.267 [2024-11-06 09:04:50.307870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.267 [2024-11-06 09:04:50.308111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.267 [2024-11-06 09:04:50.308316] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.267 [2024-11-06 09:04:50.308337] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.267 [2024-11-06 09:04:50.308351] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.311435] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.320758] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.267 [2024-11-06 09:04:50.321278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.267 [2024-11-06 09:04:50.321348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.267 [2024-11-06 09:04:50.321364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.267 [2024-11-06 09:04:50.321595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.267 [2024-11-06 09:04:50.321800] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.267 [2024-11-06 09:04:50.321821] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.267 [2024-11-06 09:04:50.321858] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.324968] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.334091] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.267 [2024-11-06 09:04:50.334515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.267 [2024-11-06 09:04:50.334545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.267 [2024-11-06 09:04:50.334562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.267 [2024-11-06 09:04:50.334805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.267 [2024-11-06 09:04:50.335058] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.267 [2024-11-06 09:04:50.335082] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.267 [2024-11-06 09:04:50.335097] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.338169] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.347458] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.267 [2024-11-06 09:04:50.347865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.267 [2024-11-06 09:04:50.347920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.267 [2024-11-06 09:04:50.347937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.267 [2024-11-06 09:04:50.348167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.267 [2024-11-06 09:04:50.348409] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.267 [2024-11-06 09:04:50.348430] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.267 [2024-11-06 09:04:50.348444] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.351505] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.360823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.267 [2024-11-06 09:04:50.361207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.267 [2024-11-06 09:04:50.361236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.267 [2024-11-06 09:04:50.361252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.267 [2024-11-06 09:04:50.361489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.267 [2024-11-06 09:04:50.361696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.267 [2024-11-06 09:04:50.361717] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.267 [2024-11-06 09:04:50.361730] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.267 [2024-11-06 09:04:50.364937] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.267 [2024-11-06 09:04:50.374225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.374594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.374623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.374639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.374885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.375093] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.375134] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.375150] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.378149] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.387505] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.387977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.388007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.388024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.388269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.388459] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.388480] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.388493] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.391597] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.400768] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.401153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.401197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.401214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.401468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.401689] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.401710] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.401723] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.405028] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.414088] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.414435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.414463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.414479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.414716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.414977] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.415001] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.415016] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.418254] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.427526] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.427877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.427907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.427923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.428146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.428352] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.428374] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.428402] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.431553] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.440752] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.441142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.441170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.441186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.441406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.441607] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.441629] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.441643] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.444823] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.454139] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.454550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.454594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.454612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.454885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.455124] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.455146] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.455159] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.458201] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.467470] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.467881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.467911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.467928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.468164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.468370] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.468390] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.468404] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.471494] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.480839] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.481220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.481248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.481264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.481481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.481687] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.481708] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.481722] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.268 [2024-11-06 09:04:50.484938] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.268 [2024-11-06 09:04:50.494233] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.268 [2024-11-06 09:04:50.494581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.268 [2024-11-06 09:04:50.494609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.268 [2024-11-06 09:04:50.494626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.268 [2024-11-06 09:04:50.494870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.268 [2024-11-06 09:04:50.495141] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.268 [2024-11-06 09:04:50.495165] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.268 [2024-11-06 09:04:50.495181] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.269 [2024-11-06 09:04:50.498457] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.269 [2024-11-06 09:04:50.507977] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.269 [2024-11-06 09:04:50.508460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.269 [2024-11-06 09:04:50.508490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.269 [2024-11-06 09:04:50.508507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.269 [2024-11-06 09:04:50.508756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.269 [2024-11-06 09:04:50.508996] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.269 [2024-11-06 09:04:50.509020] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.269 [2024-11-06 09:04:50.509034] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.269 [2024-11-06 09:04:50.512186] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.269 [2024-11-06 09:04:50.521431] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.269 [2024-11-06 09:04:50.521878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.269 [2024-11-06 09:04:50.521908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.269 [2024-11-06 09:04:50.521925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.269 [2024-11-06 09:04:50.522168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.269 [2024-11-06 09:04:50.522374] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.269 [2024-11-06 09:04:50.522395] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.269 [2024-11-06 09:04:50.522408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.269 [2024-11-06 09:04:50.525611] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.269 [2024-11-06 09:04:50.534854] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.269 [2024-11-06 09:04:50.535284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.269 [2024-11-06 09:04:50.535313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.269 [2024-11-06 09:04:50.535329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.269 [2024-11-06 09:04:50.535576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.269 [2024-11-06 09:04:50.535810] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.269 [2024-11-06 09:04:50.535857] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.269 [2024-11-06 09:04:50.535875] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.269 [2024-11-06 09:04:50.539026] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.269 [2024-11-06 09:04:50.548252] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.269 [2024-11-06 09:04:50.548663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.269 [2024-11-06 09:04:50.548692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.269 [2024-11-06 09:04:50.548709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.269 [2024-11-06 09:04:50.548952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.269 [2024-11-06 09:04:50.549199] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.269 [2024-11-06 09:04:50.549224] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.269 [2024-11-06 09:04:50.549238] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.269 [2024-11-06 09:04:50.552324] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.561914] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.562361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.562390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.562405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.562641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.562890] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.562913] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.562928] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.565829] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.575013] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.575309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.575352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.575368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.575587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.575793] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.575813] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.575826] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.578640] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 5596.50 IOPS, 21.86 MiB/s [2024-11-06T08:04:50.817Z] [2024-11-06 09:04:50.588211] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.588670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.588724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.588740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.589007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.589228] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.589250] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.589263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.592158] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.601321] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.601729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.601758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.601774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.602022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.602245] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.602266] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.602279] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.605170] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.614342] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.614684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.614712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.614728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.614998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.615208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.615229] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.615243] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.618117] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.627421] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.627765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.627793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.627810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.628076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.628300] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.628321] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.628335] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.631221] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.528 [2024-11-06 09:04:50.640557] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.528 [2024-11-06 09:04:50.640908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.528 [2024-11-06 09:04:50.640936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.528 [2024-11-06 09:04:50.640951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.528 [2024-11-06 09:04:50.641184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.528 [2024-11-06 09:04:50.641389] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.528 [2024-11-06 09:04:50.641410] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.528 [2024-11-06 09:04:50.641423] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.528 [2024-11-06 09:04:50.644354] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.653676] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.654028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.654057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.654072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.654309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.654516] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.654536] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.654550] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.657450] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.666741] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.667238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.667267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.667282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.667523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.667713] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.667734] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.667747] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.670665] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.679737] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.680151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.680180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.680196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.680438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.680644] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.680665] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.680679] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.683598] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.692943] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.693264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.693334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.693351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.693585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.693775] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.693795] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.693808] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.696742] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.706037] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.706430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.706488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.706504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.706750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.706972] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.706994] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.707008] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.709894] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.719285] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.719628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.719656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.719671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.719901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.720103] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.720142] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.720156] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.723045] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.732494] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.732903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.732932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.732948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.733182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.733372] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.733394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.733407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.736337] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.745518] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.745925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.745953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.745970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.746206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.746413] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.746434] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.746448] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.749477] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.758947] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.759294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.759324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.759341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.759566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.759776] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.529 [2024-11-06 09:04:50.759798] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.529 [2024-11-06 09:04:50.759812] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.529 [2024-11-06 09:04:50.762910] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.529 [2024-11-06 09:04:50.772065] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.529 [2024-11-06 09:04:50.772472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.529 [2024-11-06 09:04:50.772500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.529 [2024-11-06 09:04:50.772515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.529 [2024-11-06 09:04:50.772745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.529 [2024-11-06 09:04:50.772984] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.530 [2024-11-06 09:04:50.773006] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.530 [2024-11-06 09:04:50.773020] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.530 [2024-11-06 09:04:50.775910] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.530 [2024-11-06 09:04:50.785128] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.530 [2024-11-06 09:04:50.785473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-06 09:04:50.785502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.530 [2024-11-06 09:04:50.785517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.530 [2024-11-06 09:04:50.785754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.530 [2024-11-06 09:04:50.785988] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.530 [2024-11-06 09:04:50.786010] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.530 [2024-11-06 09:04:50.786023] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.530 [2024-11-06 09:04:50.788927] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.530 [2024-11-06 09:04:50.798304] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.530 [2024-11-06 09:04:50.798648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-06 09:04:50.798677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.530 [2024-11-06 09:04:50.798694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.530 [2024-11-06 09:04:50.798945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.530 [2024-11-06 09:04:50.799169] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.530 [2024-11-06 09:04:50.799190] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.530 [2024-11-06 09:04:50.799204] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.530 [2024-11-06 09:04:50.802074] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.530 [2024-11-06 09:04:50.811346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.530 [2024-11-06 09:04:50.811799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.530 [2024-11-06 09:04:50.811865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.530 [2024-11-06 09:04:50.811882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.530 [2024-11-06 09:04:50.812123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.530 [2024-11-06 09:04:50.812313] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.530 [2024-11-06 09:04:50.812333] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.530 [2024-11-06 09:04:50.812346] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.530 [2024-11-06 09:04:50.815677] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.789 [2024-11-06 09:04:50.824821] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.789 [2024-11-06 09:04:50.825277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.789 [2024-11-06 09:04:50.825332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.789 [2024-11-06 09:04:50.825348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.789 [2024-11-06 09:04:50.825594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.789 [2024-11-06 09:04:50.825784] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.789 [2024-11-06 09:04:50.825804] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.789 [2024-11-06 09:04:50.825818] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.789 [2024-11-06 09:04:50.828786] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.789 [2024-11-06 09:04:50.837836] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.789 [2024-11-06 09:04:50.838147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.789 [2024-11-06 09:04:50.838220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.789 [2024-11-06 09:04:50.838236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.789 [2024-11-06 09:04:50.838468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.789 [2024-11-06 09:04:50.838673] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.789 [2024-11-06 09:04:50.838695] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.789 [2024-11-06 09:04:50.838708] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.789 [2024-11-06 09:04:50.841603] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.789 [2024-11-06 09:04:50.850943] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.789 [2024-11-06 09:04:50.851399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.789 [2024-11-06 09:04:50.851456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.789 [2024-11-06 09:04:50.851472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.789 [2024-11-06 09:04:50.851720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.789 [2024-11-06 09:04:50.851938] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.789 [2024-11-06 09:04:50.851961] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.789 [2024-11-06 09:04:50.851975] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.789 [2024-11-06 09:04:50.854987] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.789 [2024-11-06 09:04:50.864083] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.789 [2024-11-06 09:04:50.864510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.789 [2024-11-06 09:04:50.864539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.789 [2024-11-06 09:04:50.864555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.789 [2024-11-06 09:04:50.864792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.789 [2024-11-06 09:04:50.865019] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.789 [2024-11-06 09:04:50.865055] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.789 [2024-11-06 09:04:50.865069] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.789 [2024-11-06 09:04:50.867965] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.789 [2024-11-06 09:04:50.877246] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.789 [2024-11-06 09:04:50.877600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.789 [2024-11-06 09:04:50.877629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.789 [2024-11-06 09:04:50.877645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.789 [2024-11-06 09:04:50.877894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.789 [2024-11-06 09:04:50.878105] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.789 [2024-11-06 09:04:50.878125] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.878138] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.881031] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.890344] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.890699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.890788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.890804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.891046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.891254] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.891281] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.891294] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.894186] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.903437] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.903892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.903921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.903937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.904150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.904354] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.904375] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.904388] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.907289] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.916740] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.917209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.917239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.917271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.917509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.917723] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.917742] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.917755] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.920798] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.930425] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.930779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.930823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.930851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.931070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.931304] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.931339] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.931353] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.934575] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.944225] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.944613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.944651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.944685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.944942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.945181] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.945218] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.945233] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.948532] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.957812] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.958170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.958199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.958216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.958463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.958696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.958718] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.958732] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.962025] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.971560] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.971906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.971935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.971952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.972184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.972438] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.972460] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.972474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.975553] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.984999] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.985410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.985466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.985501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.985733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.985971] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.985994] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.986010] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:50.989050] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:50.998399] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:50.998807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:50.998861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:50.998879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:50.999097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:50.999332] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.790 [2024-11-06 09:04:50.999356] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.790 [2024-11-06 09:04:50.999385] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.790 [2024-11-06 09:04:51.002882] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.790 [2024-11-06 09:04:51.012032] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:37.790 [2024-11-06 09:04:51.012495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:37.790 [2024-11-06 09:04:51.012524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:37.790 [2024-11-06 09:04:51.012540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:37.790 [2024-11-06 09:04:51.012784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:37.790 [2024-11-06 09:04:51.013028] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:37.791 [2024-11-06 09:04:51.013052] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:37.791 [2024-11-06 09:04:51.013067] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:37.791 [2024-11-06 09:04:51.016362] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:37.791 [2024-11-06 09:04:51.025623] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.791 [2024-11-06 09:04:51.025960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.791 [2024-11-06 09:04:51.025990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.791 [2024-11-06 09:04:51.026007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.791 [2024-11-06 09:04:51.026242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.791 [2024-11-06 09:04:51.026491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.791 [2024-11-06 09:04:51.026511] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.791 [2024-11-06 09:04:51.026525] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.791 [2024-11-06 09:04:51.029683] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.791 [2024-11-06 09:04:51.038936] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.791 [2024-11-06 09:04:51.039362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.791 [2024-11-06 09:04:51.039390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.791 [2024-11-06 09:04:51.039406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.791 [2024-11-06 09:04:51.039642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.791 [2024-11-06 09:04:51.039880] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.791 [2024-11-06 09:04:51.039903] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.791 [2024-11-06 09:04:51.039919] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.791 [2024-11-06 09:04:51.042949] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.791 [2024-11-06 09:04:51.052203] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.791 [2024-11-06 09:04:51.052559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.791 [2024-11-06 09:04:51.052615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.791 [2024-11-06 09:04:51.052648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.791 [2024-11-06 09:04:51.052892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.791 [2024-11-06 09:04:51.053115] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.791 [2024-11-06 09:04:51.053136] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.791 [2024-11-06 09:04:51.053151] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.791 [2024-11-06 09:04:51.056164] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:37.791 [2024-11-06 09:04:51.065397] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:37.791 [2024-11-06 09:04:51.065738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:37.791 [2024-11-06 09:04:51.065767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:37.791 [2024-11-06 09:04:51.065784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:37.791 [2024-11-06 09:04:51.066034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:37.791 [2024-11-06 09:04:51.066242] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:37.791 [2024-11-06 09:04:51.066261] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:37.791 [2024-11-06 09:04:51.066279] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:37.791 [2024-11-06 09:04:51.069196] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.079025] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.079407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.079435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.079455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.079693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.079941] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.079963] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.079977] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.083017] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.092248] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.092666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.092694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.092714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.092960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.093171] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.093191] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.093205] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.096068] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.105397] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.105709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.105736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.105752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.106018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.106231] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.106250] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.106263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.109152] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.118434] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.118844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.118907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.118924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.119183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.119373] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.119393] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.119407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.122304] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.131639] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.132013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.132054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.132071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.132321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.132527] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.132546] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.132560] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.135448] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.145035] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.050 [2024-11-06 09:04:51.145477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.050 [2024-11-06 09:04:51.145505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.050 [2024-11-06 09:04:51.145532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.050 [2024-11-06 09:04:51.145770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.050 [2024-11-06 09:04:51.146018] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.050 [2024-11-06 09:04:51.146041] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.050 [2024-11-06 09:04:51.146055] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.050 [2024-11-06 09:04:51.149181] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.050 [2024-11-06 09:04:51.158369] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.158808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.158879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.158899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.159130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.159341] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.159360] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.159373] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.162436] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.171701] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.172146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.172175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.172207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.172439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.172629] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.172648] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.172661] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.175669] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.185027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.185468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.185496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.185522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.185756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.185982] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.186004] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.186018] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.188982] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.198209] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.198667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.198717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.198734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.198998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.199226] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.199245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.199258] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.202205] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.211319] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.211772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.211839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.211857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.212105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.212295] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.212315] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.212328] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.215188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.224320] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.224629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.224656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.224671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.224893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.225090] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.225124] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.225137] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.228006] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.237524] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.237939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.237968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.237993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.238230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.238434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.238453] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.238471] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.241361] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.250681] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.251015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.251042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.251058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.251278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.051 [2024-11-06 09:04:51.251484] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.051 [2024-11-06 09:04:51.251504] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.051 [2024-11-06 09:04:51.251533] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.051 [2024-11-06 09:04:51.254695] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.051 [2024-11-06 09:04:51.263821] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.051 [2024-11-06 09:04:51.264204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.051 [2024-11-06 09:04:51.264232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.051 [2024-11-06 09:04:51.264248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.051 [2024-11-06 09:04:51.264489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.264693] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.264713] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.264726] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.267652] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.052 [2024-11-06 09:04:51.277182] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.052 [2024-11-06 09:04:51.277529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.052 [2024-11-06 09:04:51.277558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.052 [2024-11-06 09:04:51.277574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.052 [2024-11-06 09:04:51.277811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.278016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.278037] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.278051] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.280836] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.052 [2024-11-06 09:04:51.290294] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.052 [2024-11-06 09:04:51.290588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.052 [2024-11-06 09:04:51.290630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.052 [2024-11-06 09:04:51.290645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.052 [2024-11-06 09:04:51.290875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.291077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.291098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.291112] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.294018] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.052 [2024-11-06 09:04:51.303527] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.052 [2024-11-06 09:04:51.303946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.052 [2024-11-06 09:04:51.303976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.052 [2024-11-06 09:04:51.303993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.052 [2024-11-06 09:04:51.304236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.304441] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.304461] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.304474] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.307396] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.052 [2024-11-06 09:04:51.316531] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.052 [2024-11-06 09:04:51.316875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.052 [2024-11-06 09:04:51.316903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.052 [2024-11-06 09:04:51.316920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.052 [2024-11-06 09:04:51.317156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.317362] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.317381] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.317394] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.320283] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.052 [2024-11-06 09:04:51.329690] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.052 [2024-11-06 09:04:51.330011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.052 [2024-11-06 09:04:51.330038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.052 [2024-11-06 09:04:51.330058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.052 [2024-11-06 09:04:51.330280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.052 [2024-11-06 09:04:51.330487] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.052 [2024-11-06 09:04:51.330506] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.052 [2024-11-06 09:04:51.330519] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.052 [2024-11-06 09:04:51.333412] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.311 [2024-11-06 09:04:51.342806] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.311 [2024-11-06 09:04:51.343209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.311 [2024-11-06 09:04:51.343246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.311 [2024-11-06 09:04:51.343262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.311 [2024-11-06 09:04:51.343496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.311 [2024-11-06 09:04:51.343740] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.311 [2024-11-06 09:04:51.343761] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.311 [2024-11-06 09:04:51.343775] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.311 [2024-11-06 09:04:51.346829] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.311 [2024-11-06 09:04:51.355893] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.311 [2024-11-06 09:04:51.356234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.311 [2024-11-06 09:04:51.356262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.311 [2024-11-06 09:04:51.356278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.311 [2024-11-06 09:04:51.356514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.311 [2024-11-06 09:04:51.356719] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.311 [2024-11-06 09:04:51.356738] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.311 [2024-11-06 09:04:51.356752] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.311 [2024-11-06 09:04:51.359524] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.311 [2024-11-06 09:04:51.369097] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.311 [2024-11-06 09:04:51.369495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.311 [2024-11-06 09:04:51.369523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.311 [2024-11-06 09:04:51.369539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.311 [2024-11-06 09:04:51.369757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.311 [2024-11-06 09:04:51.369999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.311 [2024-11-06 09:04:51.370021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.311 [2024-11-06 09:04:51.370034] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.311 [2024-11-06 09:04:51.373006] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.311 [2024-11-06 09:04:51.382379] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.311 [2024-11-06 09:04:51.382773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.311 [2024-11-06 09:04:51.382827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.311 [2024-11-06 09:04:51.382879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.311 [2024-11-06 09:04:51.383126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.311 [2024-11-06 09:04:51.383331] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.311 [2024-11-06 09:04:51.383351] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.311 [2024-11-06 09:04:51.383364] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.311 [2024-11-06 09:04:51.386251] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.311 [2024-11-06 09:04:51.395387] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.311 [2024-11-06 09:04:51.395730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.311 [2024-11-06 09:04:51.395758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.311 [2024-11-06 09:04:51.395775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.311 [2024-11-06 09:04:51.396020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.311 [2024-11-06 09:04:51.396228] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.311 [2024-11-06 09:04:51.396248] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.311 [2024-11-06 09:04:51.396261] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.311 [2024-11-06 09:04:51.399148] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.311 [2024-11-06 09:04:51.408638] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.311 [2024-11-06 09:04:51.409006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.311 [2024-11-06 09:04:51.409046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.311 [2024-11-06 09:04:51.409062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.311 [2024-11-06 09:04:51.409313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.311 [2024-11-06 09:04:51.409519] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.311 [2024-11-06 09:04:51.409538] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.311 [2024-11-06 09:04:51.409555] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.311 [2024-11-06 09:04:51.412458] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.311 [2024-11-06 09:04:51.421669] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.422027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.422055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.422075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.422311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.422517] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.422536] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.422549] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.425325] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.434840] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.435211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.435240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.435256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.435480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.435687] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.435706] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.435719] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.438679] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.448029] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.448452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.448484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.448501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.448737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.448980] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.449002] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.449016] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.451872] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.461003] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.461314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.461341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.461357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.461575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.461781] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.461801] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.461829] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.464743] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.474126] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.474469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.474497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.474513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.474749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.474986] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.475007] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.475022] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.477887] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.487213] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.487559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.487587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.487604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.487852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.488056] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.488077] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.488091] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.490989] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.500322] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.500662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.500690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.500711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.500980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.501218] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.501238] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.501250] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.504330] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.513424] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.513837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.513881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.513897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.514137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.514342] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.514361] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.514374] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.517265] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.526586] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.526991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.527021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.527037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.527278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.527468] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.527487] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.527500] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.312 [2024-11-06 09:04:51.530323] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.312 [2024-11-06 09:04:51.539636] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.312 [2024-11-06 09:04:51.539937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.312 [2024-11-06 09:04:51.539978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.312 [2024-11-06 09:04:51.539995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.312 [2024-11-06 09:04:51.540215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.312 [2024-11-06 09:04:51.540428] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.312 [2024-11-06 09:04:51.540448] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.312 [2024-11-06 09:04:51.540461] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.313 [2024-11-06 09:04:51.543279] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.313 [2024-11-06 09:04:51.552734] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.313 [2024-11-06 09:04:51.553116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.313 [2024-11-06 09:04:51.553154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.313 [2024-11-06 09:04:51.553170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.313 [2024-11-06 09:04:51.553388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.313 [2024-11-06 09:04:51.553594] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.313 [2024-11-06 09:04:51.553614] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.313 [2024-11-06 09:04:51.553627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.313 [2024-11-06 09:04:51.556515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.313 [2024-11-06 09:04:51.565863] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.313 [2024-11-06 09:04:51.566217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.313 [2024-11-06 09:04:51.566245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.313 [2024-11-06 09:04:51.566260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.313 [2024-11-06 09:04:51.566497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.313 [2024-11-06 09:04:51.566703] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.313 [2024-11-06 09:04:51.566723] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.313 [2024-11-06 09:04:51.566736] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.313 [2024-11-06 09:04:51.569661] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.313 [2024-11-06 09:04:51.579023] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.313 [2024-11-06 09:04:51.579386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.313 [2024-11-06 09:04:51.579414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.313 [2024-11-06 09:04:51.579432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.313 [2024-11-06 09:04:51.579669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.313 [2024-11-06 09:04:51.579901] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.313 [2024-11-06 09:04:51.579922] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.313 [2024-11-06 09:04:51.579940] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.313 [2024-11-06 09:04:51.582875] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.313 4477.20 IOPS, 17.49 MiB/s [2024-11-06T08:04:51.602Z] [2024-11-06 09:04:51.592112] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.313 [2024-11-06 09:04:51.592424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.313 [2024-11-06 09:04:51.592451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.313 [2024-11-06 09:04:51.592467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.313 [2024-11-06 09:04:51.592684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.313 [2024-11-06 09:04:51.592935] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.313 [2024-11-06 09:04:51.592957] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.313 [2024-11-06 09:04:51.592971] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.313 [2024-11-06 09:04:51.595936] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.605517] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.606018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.606046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.606062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.606308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.606513] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.606533] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.606547] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.609434] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.618604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.618897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.618939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.618955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.619172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.619378] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.619398] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.619411] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.622184] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.631686] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.632035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.632062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.632078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.632294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.632500] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.632519] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.632532] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.635342] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.644795] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.645093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.645134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.645150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.645368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.645574] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.645593] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.645607] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.648380] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.657840] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.658253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.658280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.658301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.658537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.658743] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.658763] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.658776] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.661662] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.671016] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:38.572 [2024-11-06 09:04:51.671379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:38.572 [2024-11-06 09:04:51.671407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:38.572 [2024-11-06 09:04:51.671429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:38.572 [2024-11-06 09:04:51.671666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:38.572 [2024-11-06 09:04:51.671899] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:38.572 [2024-11-06 09:04:51.671920] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:38.572 [2024-11-06 09:04:51.671934] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:38.572 [2024-11-06 09:04:51.674769] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:38.572 [2024-11-06 09:04:51.684136] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.572 [2024-11-06 09:04:51.684479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.572 [2024-11-06 09:04:51.684508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.572 [2024-11-06 09:04:51.684524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.572 [2024-11-06 09:04:51.684760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.572 [2024-11-06 09:04:51.684998] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.572 [2024-11-06 09:04:51.685019] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.572 [2024-11-06 09:04:51.685033] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.572 [2024-11-06 09:04:51.687915] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.572 [2024-11-06 09:04:51.697253] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.572 [2024-11-06 09:04:51.697597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.572 [2024-11-06 09:04:51.697626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.572 [2024-11-06 09:04:51.697642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.572 [2024-11-06 09:04:51.697889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.572 [2024-11-06 09:04:51.698100] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.572 [2024-11-06 09:04:51.698120] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.572 [2024-11-06 09:04:51.698148] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.701022] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.710402] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.710816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.710855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.710871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.711111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.711322] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.711341] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.711354] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.714136] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.723469] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.723886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.723914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.723931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.724169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.724375] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.724394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.724408] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.727298] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.736618] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.736987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.737023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.737039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.737286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.737491] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.737511] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.737524] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.740375] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.749674] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.750095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.750123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.750139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.750373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.750563] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.750583] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.750600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.753487] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.763048] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.763470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.763501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.763517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.763753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.763985] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.764007] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.764021] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.766908] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.776264] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.776608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.776636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.776652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.776901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.777113] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.777133] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.777172] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.780058] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.789453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.789750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.789778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.789794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.790041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.790267] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.790287] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.790301] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.793197] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.802705] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.803107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.803136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.803153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.803388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.573 [2024-11-06 09:04:51.803596] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.573 [2024-11-06 09:04:51.803617] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.573 [2024-11-06 09:04:51.803630] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.573 [2024-11-06 09:04:51.806525] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.573 [2024-11-06 09:04:51.815807] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.573 [2024-11-06 09:04:51.816132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.573 [2024-11-06 09:04:51.816161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.573 [2024-11-06 09:04:51.816177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.573 [2024-11-06 09:04:51.816396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.574 [2024-11-06 09:04:51.816604] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.574 [2024-11-06 09:04:51.816624] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.574 [2024-11-06 09:04:51.816637] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.574 [2024-11-06 09:04:51.819491] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.574 [2024-11-06 09:04:51.829010] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.574 [2024-11-06 09:04:51.829372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.574 [2024-11-06 09:04:51.829401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.574 [2024-11-06 09:04:51.829417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.574 [2024-11-06 09:04:51.829654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.574 [2024-11-06 09:04:51.829887] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.574 [2024-11-06 09:04:51.829909] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.574 [2024-11-06 09:04:51.829924] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.574 [2024-11-06 09:04:51.832790] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.574 [2024-11-06 09:04:51.842136] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.574 [2024-11-06 09:04:51.842540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.574 [2024-11-06 09:04:51.842569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.574 [2024-11-06 09:04:51.842592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.574 [2024-11-06 09:04:51.842829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.574 [2024-11-06 09:04:51.843048] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.574 [2024-11-06 09:04:51.843068] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.574 [2024-11-06 09:04:51.843081] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.574 [2024-11-06 09:04:51.845854] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.574 [2024-11-06 09:04:51.855193] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.574 [2024-11-06 09:04:51.855538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.574 [2024-11-06 09:04:51.855566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.574 [2024-11-06 09:04:51.855583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.574 [2024-11-06 09:04:51.855820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.574 [2024-11-06 09:04:51.856037] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.574 [2024-11-06 09:04:51.856058] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.574 [2024-11-06 09:04:51.856072] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.574 [2024-11-06 09:04:51.859402] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.868609] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.833 [2024-11-06 09:04:51.868985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.833 [2024-11-06 09:04:51.869015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.833 [2024-11-06 09:04:51.869032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.833 [2024-11-06 09:04:51.869256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.833 [2024-11-06 09:04:51.869461] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.833 [2024-11-06 09:04:51.869481] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.833 [2024-11-06 09:04:51.869495] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.833 [2024-11-06 09:04:51.872426] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.881615] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.833 [2024-11-06 09:04:51.882034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.833 [2024-11-06 09:04:51.882063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.833 [2024-11-06 09:04:51.882080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.833 [2024-11-06 09:04:51.882316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.833 [2024-11-06 09:04:51.882526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.833 [2024-11-06 09:04:51.882547] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.833 [2024-11-06 09:04:51.882560] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.833 [2024-11-06 09:04:51.885456] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.894783] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.833 [2024-11-06 09:04:51.895154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.833 [2024-11-06 09:04:51.895183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.833 [2024-11-06 09:04:51.895199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.833 [2024-11-06 09:04:51.895436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.833 [2024-11-06 09:04:51.895641] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.833 [2024-11-06 09:04:51.895662] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.833 [2024-11-06 09:04:51.895675] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.833 [2024-11-06 09:04:51.898566] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.907815] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.833 [2024-11-06 09:04:51.908135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.833 [2024-11-06 09:04:51.908163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.833 [2024-11-06 09:04:51.908179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.833 [2024-11-06 09:04:51.908396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.833 [2024-11-06 09:04:51.908602] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.833 [2024-11-06 09:04:51.908623] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.833 [2024-11-06 09:04:51.908636] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.833 [2024-11-06 09:04:51.911529] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.921041] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.833 [2024-11-06 09:04:51.921378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.833 [2024-11-06 09:04:51.921406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.833 [2024-11-06 09:04:51.921422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.833 [2024-11-06 09:04:51.921641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.833 [2024-11-06 09:04:51.921876] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.833 [2024-11-06 09:04:51.921898] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.833 [2024-11-06 09:04:51.921917] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.833 [2024-11-06 09:04:51.924743] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.833 [2024-11-06 09:04:51.934239] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.934582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:51.934609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:51.934624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:51.934851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:51.935047] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:51.935068] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:51.935082] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:51.937970] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:51.947312] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.947772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:51.947823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:51.947850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:51.948092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:51.948282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:51.948303] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:51.948317] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:51.951089] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:51.960389] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.960780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:51.960843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:51.960862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:51.961107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:51.961297] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:51.961317] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:51.961331] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:51.964105] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:51.973414] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.973732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:51.973759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:51.973775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:51.974038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:51.974247] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:51.974268] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:51.974281] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:51.977183] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:51.986587] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.986958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:51.986988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:51.987004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:51.987228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:51.987434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:51.987454] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:51.987467] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:51.990361] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:51.999610] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:51.999990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:52.000019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:52.000034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:52.000253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:52.000458] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:52.000479] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:52.000492] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:52.003388] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:52.012717] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:52.013158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:52.013187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:52.013209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:52.013445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:52.013650] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:52.013671] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:52.013685] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:52.016693] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:52.026159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:52.026537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:52.026567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:52.026583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:52.026813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:52.027054] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:52.027076] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:52.027090] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:52.030391] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:52.039747] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:52.040076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:52.040105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:52.040123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:52.040354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:52.040565] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:52.040586] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:52.040600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:52.043866] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:52.053318] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.834 [2024-11-06 09:04:52.053709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.834 [2024-11-06 09:04:52.053759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.834 [2024-11-06 09:04:52.053777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.834 [2024-11-06 09:04:52.054005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.834 [2024-11-06 09:04:52.054267] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.834 [2024-11-06 09:04:52.054289] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.834 [2024-11-06 09:04:52.054303] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.834 [2024-11-06 09:04:52.057599] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.834 [2024-11-06 09:04:52.067036] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.835 [2024-11-06 09:04:52.067450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.835 [2024-11-06 09:04:52.067499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.835 [2024-11-06 09:04:52.067516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.835 [2024-11-06 09:04:52.067738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.835 [2024-11-06 09:04:52.067982] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.835 [2024-11-06 09:04:52.068007] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.835 [2024-11-06 09:04:52.068022] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.835 [2024-11-06 09:04:52.071416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.835 [2024-11-06 09:04:52.080682] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.835 [2024-11-06 09:04:52.081013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.835 [2024-11-06 09:04:52.081043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.835 [2024-11-06 09:04:52.081060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.835 [2024-11-06 09:04:52.081294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.835 [2024-11-06 09:04:52.081511] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.835 [2024-11-06 09:04:52.081533] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.835 [2024-11-06 09:04:52.081546] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.835 [2024-11-06 09:04:52.084879] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.835 [2024-11-06 09:04:52.094356] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.835 [2024-11-06 09:04:52.094709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.835 [2024-11-06 09:04:52.094739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.835 [2024-11-06 09:04:52.094755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.835 [2024-11-06 09:04:52.094997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.835 [2024-11-06 09:04:52.095228] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.835 [2024-11-06 09:04:52.095250] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.835 [2024-11-06 09:04:52.095270] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.835 [2024-11-06 09:04:52.098407] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.835 [2024-11-06 09:04:52.107738] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:38.835 [2024-11-06 09:04:52.108063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.835 [2024-11-06 09:04:52.108092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:38.835 [2024-11-06 09:04:52.108110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:38.835 [2024-11-06 09:04:52.108348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:38.835 [2024-11-06 09:04:52.108553] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:38.835 [2024-11-06 09:04:52.108574] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:38.835 [2024-11-06 09:04:52.108587] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:38.835 [2024-11-06 09:04:52.111621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:38.835 [2024-11-06 09:04:52.121446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.096 [2024-11-06 09:04:52.121898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.096 [2024-11-06 09:04:52.121928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.096 [2024-11-06 09:04:52.121944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.096 [2024-11-06 09:04:52.122175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.096 [2024-11-06 09:04:52.122393] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.096 [2024-11-06 09:04:52.122414] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.122427] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.125537] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.134764] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.135198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.135226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.135242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.135479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.135684] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.135704] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.135717] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.138615] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.148089] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.148477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.148506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.148522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.148760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.149013] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.149037] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.149051] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.152416] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.161458] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.161803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.161856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.161875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.162091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.162304] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.162325] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.162339] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.165368] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.174856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.175252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.175293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.175308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.175528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.175734] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.175755] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.175769] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.178772] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.188124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.188485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.188514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.188539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.188777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.189010] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.189031] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.189045] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.191939] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.201303] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.201613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.201651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.201685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.201929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.202155] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.202176] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.202189] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 [2024-11-06 09:04:52.205066] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 [2024-11-06 09:04:52.214469] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.097 [2024-11-06 09:04:52.214781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.097 [2024-11-06 09:04:52.214808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.097 [2024-11-06 09:04:52.214824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.097 [2024-11-06 09:04:52.215095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.097 [2024-11-06 09:04:52.215302] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.097 [2024-11-06 09:04:52.215321] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.097 [2024-11-06 09:04:52.215335] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 934881 Killed "${NVMF_APP[@]}" "$@"
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 09:04:52.218487] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=935916
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 935916
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 935916 ']'
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:39.097 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 09:04:52.227922] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-11-06 09:04:52.228373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.228401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.228417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.228654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.228898] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.228920] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.228936] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.232064] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.241432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.241854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.241884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.241901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.242133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.242363] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.242383] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.242397] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.245456] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.254836] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.255169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.255198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.255214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.255445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.255657] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.255678] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.255691] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.258625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[2024-11-06 09:04:52.265740] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:28:39.098 [2024-11-06 09:04:52.265797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-06 09:04:52.268198] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-11-06 09:04:52.268635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 09:04:52.268664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
[2024-11-06 09:04:52.268680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
[2024-11-06 09:04:52.268935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
[2024-11-06 09:04:52.269144] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-06 09:04:52.269180] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-06 09:04:52.269195] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-06 09:04:52.272383] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.281479] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.281845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.281891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.281908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.282168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.282364] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.282384] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.282398] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.285371] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.295004] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.295393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.295432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.295448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.295691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.295947] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.295970] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.295984] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.298975] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.308449] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.308804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.308856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.308874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.309105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.309343] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.309363] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.309376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.098 [2024-11-06 09:04:52.312469] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.098 [2024-11-06 09:04:52.321657] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.098 [2024-11-06 09:04:52.322035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.098 [2024-11-06 09:04:52.322075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.098 [2024-11-06 09:04:52.322091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.098 [2024-11-06 09:04:52.322332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.098 [2024-11-06 09:04:52.322543] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.098 [2024-11-06 09:04:52.322563] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.098 [2024-11-06 09:04:52.322576] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.099 [2024-11-06 09:04:52.325555] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.099 [2024-11-06 09:04:52.335025] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.099 [2024-11-06 09:04:52.335418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.099 [2024-11-06 09:04:52.335457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.099 [2024-11-06 09:04:52.335474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.099 [2024-11-06 09:04:52.335717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.099 [2024-11-06 09:04:52.335955] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.099 [2024-11-06 09:04:52.335981] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.099 [2024-11-06 09:04:52.335995] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.099 [2024-11-06 09:04:52.338626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:39.099 [2024-11-06 09:04:52.338968] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.099 [2024-11-06 09:04:52.348374] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.099 [2024-11-06 09:04:52.348964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.099 [2024-11-06 09:04:52.349013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.099 [2024-11-06 09:04:52.349032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.099 [2024-11-06 09:04:52.349294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.099 [2024-11-06 09:04:52.349494] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.099 [2024-11-06 09:04:52.349515] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.099 [2024-11-06 09:04:52.349530] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.099 [2024-11-06 09:04:52.352525] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.099 [2024-11-06 09:04:52.361698] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.099 [2024-11-06 09:04:52.362258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.099 [2024-11-06 09:04:52.362300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.099 [2024-11-06 09:04:52.362319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.099 [2024-11-06 09:04:52.362592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.099 [2024-11-06 09:04:52.362789] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.099 [2024-11-06 09:04:52.362809] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.099 [2024-11-06 09:04:52.362830] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.099 [2024-11-06 09:04:52.365804] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.099 [2024-11-06 09:04:52.374945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.099 [2024-11-06 09:04:52.375394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.099 [2024-11-06 09:04:52.375422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.099 [2024-11-06 09:04:52.375439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.099 [2024-11-06 09:04:52.375675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.099 [2024-11-06 09:04:52.375901] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.099 [2024-11-06 09:04:52.375923] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.099 [2024-11-06 09:04:52.375937] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.099 [2024-11-06 09:04:52.378971] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.359 [2024-11-06 09:04:52.388342] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.359 [2024-11-06 09:04:52.388767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.359 [2024-11-06 09:04:52.388797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.359 [2024-11-06 09:04:52.388820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.359 [2024-11-06 09:04:52.389103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.359 [2024-11-06 09:04:52.389334] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.359 [2024-11-06 09:04:52.389354] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.359 [2024-11-06 09:04:52.389368] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.392693] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.396677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:39.360 [2024-11-06 09:04:52.396709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:39.360 [2024-11-06 09:04:52.396729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:39.360 [2024-11-06 09:04:52.396741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:39.360 [2024-11-06 09:04:52.396751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:39.360 [2024-11-06 09:04:52.398104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:39.360 [2024-11-06 09:04:52.398172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:39.360 [2024-11-06 09:04:52.398176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:39.360 [2024-11-06 09:04:52.401921] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.402377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.402409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.402431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.402667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.402921] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.402944] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.402961] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.406131] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.415552] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.416071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.416120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.416139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.416417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.416629] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.416650] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.416666] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.419826] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.429029] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.429543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.429592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.429613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.429860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.430079] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.430101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.430118] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.433297] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.442634] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.443179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.443229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.443249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.443502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.443713] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.443735] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.443751] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.446946] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.456332] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.456807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.456862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.456892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.457155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.457368] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.457398] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.457415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.460571] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.469946] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.470477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.470528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.470548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.470800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.471045] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.471069] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.471086] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.474271] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.483424] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.483743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.483789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.483806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.484032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.484275] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.484297] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.484312] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.487468] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.497062] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.497431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.497461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.497478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.497710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.497956] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.360 [2024-11-06 09:04:52.497981] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.360 [2024-11-06 09:04:52.497996] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.360 [2024-11-06 09:04:52.501270] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.360 [2024-11-06 09:04:52.510693] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.360 [2024-11-06 09:04:52.511053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.360 [2024-11-06 09:04:52.511083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.360 [2024-11-06 09:04:52.511099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.360 [2024-11-06 09:04:52.511316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.360 [2024-11-06 09:04:52.511546] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.361 [2024-11-06 09:04:52.511569] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.361 [2024-11-06 09:04:52.511584] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:39.361 [2024-11-06 09:04:52.514923] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 [2024-11-06 09:04:52.524374] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.361 [2024-11-06 09:04:52.524697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.361 [2024-11-06 09:04:52.524727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.361 [2024-11-06 09:04:52.524744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.361 [2024-11-06 09:04:52.524971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.361 [2024-11-06 09:04:52.525221] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.361 [2024-11-06 09:04:52.525244] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.361 [2024-11-06 09:04:52.525258] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.361 [2024-11-06 09:04:52.528624] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 [2024-11-06 09:04:52.537808] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.361 [2024-11-06 09:04:52.538242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.361 [2024-11-06 09:04:52.538271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.361 [2024-11-06 09:04:52.538288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.361 [2024-11-06 09:04:52.538520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
[2024-11-06 09:04:52.538744] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-06 09:04:52.538772] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-06 09:04:52.538787] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 09:04:52.542061] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 [2024-11-06 09:04:52.543008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-11-06 09:04:52.551548] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
[2024-11-06 09:04:52.551876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-06 09:04:52.551916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
[2024-11-06 09:04:52.551933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
[2024-11-06 09:04:52.552165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
[2024-11-06 09:04:52.552390] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-06 09:04:52.552412] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-06 09:04:52.552427] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-06 09:04:52.555623] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 [2024-11-06 09:04:52.564969] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.361 [2024-11-06 09:04:52.565398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.361 [2024-11-06 09:04:52.565431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.361 [2024-11-06 09:04:52.565449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.361 [2024-11-06 09:04:52.565697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.361 [2024-11-06 09:04:52.565939] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.361 [2024-11-06 09:04:52.565962] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.361 [2024-11-06 09:04:52.565976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.361 [2024-11-06 09:04:52.569103] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 [2024-11-06 09:04:52.578403] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:39.361 [2024-11-06 09:04:52.578791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:39.361 [2024-11-06 09:04:52.578823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420
00:28:39.361 [2024-11-06 09:04:52.578860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set
00:28:39.361 [2024-11-06 09:04:52.579082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor
00:28:39.361 [2024-11-06 09:04:52.579324] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:39.361 [2024-11-06 09:04:52.579347] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:39.361 [2024-11-06 09:04:52.579362] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:39.361 [2024-11-06 09:04:52.582593] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:39.361 Malloc0 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.361 3731.00 IOPS, 14.57 MiB/s [2024-11-06T08:04:52.650Z] [2024-11-06 09:04:52.593480] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:39.361 [2024-11-06 09:04:52.593885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:39.361 [2024-11-06 09:04:52.593917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x747a40 with addr=10.0.0.2, port=4420 00:28:39.361 [2024-11-06 09:04:52.593933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x747a40 is same with the state(6) to be set 00:28:39.361 [2024-11-06 09:04:52.594152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x747a40 (9): Bad file descriptor 00:28:39.361 [2024-11-06 09:04:52.594372] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.361 [2024-11-06 09:04:52.594396] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:39.361 [2024-11-06 09:04:52.594427] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.361 [2024-11-06 09:04:52.597612] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.361 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.361 [2024-11-06 09:04:52.606374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.362 [2024-11-06 09:04:52.606998] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:39.362 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.362 09:04:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 935133 00:28:39.619 [2024-11-06 09:04:52.684445] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:41.485 4322.43 IOPS, 16.88 MiB/s [2024-11-06T08:04:55.707Z] 4855.12 IOPS, 18.97 MiB/s [2024-11-06T08:04:56.640Z] 5262.67 IOPS, 20.56 MiB/s [2024-11-06T08:04:58.013Z] 5597.90 IOPS, 21.87 MiB/s [2024-11-06T08:04:59.000Z] 5881.27 IOPS, 22.97 MiB/s [2024-11-06T08:04:59.932Z] 6108.83 IOPS, 23.86 MiB/s [2024-11-06T08:05:00.865Z] 6316.15 IOPS, 24.67 MiB/s [2024-11-06T08:05:01.797Z] 6468.64 IOPS, 25.27 MiB/s 00:28:48.508 Latency(us) 00:28:48.508 [2024-11-06T08:05:01.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.508 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:48.508 Verification LBA range: start 0x0 length 0x4000 00:28:48.508 Nvme1n1 : 15.00 6619.57 25.86 10039.41 0.00 7660.12 843.47 18155.90 00:28:48.508 [2024-11-06T08:05:01.797Z] =================================================================================================================== 00:28:48.508 [2024-11-06T08:05:01.797Z] Total : 6619.57 25.86 10039.41 0.00 7660.12 843.47 18155.90 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:48.766 09:05:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.766 rmmod nvme_tcp 00:28:48.766 rmmod nvme_fabrics 00:28:48.766 rmmod nvme_keyring 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 935916 ']' 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 935916 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 935916 ']' 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 935916 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 935916 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 935916' 00:28:48.766 killing process with pid 935916 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@969 -- # kill 935916 00:28:48.766 09:05:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 935916 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.025 09:05:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.562 00:28:51.562 real 0m22.429s 00:28:51.562 user 0m59.938s 00:28:51.562 sys 0m4.195s 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:51.562 ************************************ 00:28:51.562 END TEST nvmf_bdevperf 00:28:51.562 ************************************ 00:28:51.562 09:05:04 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.562 ************************************ 00:28:51.562 START TEST nvmf_target_disconnect 00:28:51.562 ************************************ 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:51.562 * Looking for test storage... 00:28:51.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lcov --version 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.562 09:05:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.562 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:28:51.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.563 --rc genhtml_branch_coverage=1 00:28:51.563 --rc genhtml_function_coverage=1 00:28:51.563 --rc genhtml_legend=1 00:28:51.563 --rc geninfo_all_blocks=1 00:28:51.563 --rc geninfo_unexecuted_blocks=1 
00:28:51.563 00:28:51.563 ' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:28:51.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.563 --rc genhtml_branch_coverage=1 00:28:51.563 --rc genhtml_function_coverage=1 00:28:51.563 --rc genhtml_legend=1 00:28:51.563 --rc geninfo_all_blocks=1 00:28:51.563 --rc geninfo_unexecuted_blocks=1 00:28:51.563 00:28:51.563 ' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:28:51.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.563 --rc genhtml_branch_coverage=1 00:28:51.563 --rc genhtml_function_coverage=1 00:28:51.563 --rc genhtml_legend=1 00:28:51.563 --rc geninfo_all_blocks=1 00:28:51.563 --rc geninfo_unexecuted_blocks=1 00:28:51.563 00:28:51.563 ' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:28:51.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.563 --rc genhtml_branch_coverage=1 00:28:51.563 --rc genhtml_function_coverage=1 00:28:51.563 --rc genhtml_legend=1 00:28:51.563 --rc geninfo_all_blocks=1 00:28:51.563 --rc geninfo_unexecuted_blocks=1 00:28:51.563 00:28:51.563 ' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.563 09:05:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.563 09:05:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:53.466 
09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:53.466 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:53.466 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:53.466 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:53.467 Found net devices under 0000:09:00.0: cvl_0_0 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:53.467 Found net devices under 0000:09:00.1: cvl_0_1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:53.467 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.726 09:05:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:53.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:28:53.726 00:28:53.726 --- 10.0.0.2 ping statistics --- 00:28:53.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.726 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:53.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:28:53.726 00:28:53.726 --- 10.0.0.1 ping statistics --- 00:28:53.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.726 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:53.726 09:05:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:53.726 ************************************ 00:28:53.726 START TEST nvmf_target_disconnect_tc1 00:28:53.726 ************************************ 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:53.726 [2024-11-06 09:05:06.918399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:53.726 [2024-11-06 09:05:06.918470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124bf40 with 
addr=10.0.0.2, port=4420 00:28:53.726 [2024-11-06 09:05:06.918506] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:53.726 [2024-11-06 09:05:06.918526] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:53.726 [2024-11-06 09:05:06.918540] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:53.726 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:53.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:53.726 Initializing NVMe Controllers 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:53.726 00:28:53.726 real 0m0.099s 00:28:53.726 user 0m0.045s 00:28:53.726 sys 0m0.054s 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.726 ************************************ 00:28:53.726 END TEST nvmf_target_disconnect_tc1 00:28:53.726 ************************************ 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:53.726 09:05:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:53.726 ************************************ 00:28:53.726 START TEST nvmf_target_disconnect_tc2 00:28:53.726 ************************************ 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=939076 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 939076 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 939076 ']' 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.726 09:05:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.985 [2024-11-06 09:05:07.023050] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:28:53.985 [2024-11-06 09:05:07.023141] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.985 [2024-11-06 09:05:07.096458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:53.985 [2024-11-06 09:05:07.156739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.985 [2024-11-06 09:05:07.156795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.985 [2024-11-06 09:05:07.156838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.985 [2024-11-06 09:05:07.156852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.985 [2024-11-06 09:05:07.156862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:53.985 [2024-11-06 09:05:07.158423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:53.985 [2024-11-06 09:05:07.158484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:53.985 [2024-11-06 09:05:07.158550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:53.985 [2024-11-06 09:05:07.158553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.243 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.243 Malloc0 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.244 [2024-11-06 09:05:07.354724] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.244 [2024-11-06 09:05:07.383011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=939110 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:54.244 09:05:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:56.148 09:05:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 939076 00:28:56.148 09:05:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write 
completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Write completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.148 starting I/O failed 00:28:56.148 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 [2024-11-06 09:05:09.408593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 
00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Write completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 
[2024-11-06 09:05:09.408940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 00:28:56.149 Read completed with error (sct=0, sc=8) 00:28:56.149 starting I/O failed 
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 [2024-11-06 09:05:09.409245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Write completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 Read completed with error (sct=0, sc=8)
00:28:56.149 starting I/O failed
00:28:56.149 [2024-11-06 09:05:09.409583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:56.149 [2024-11-06 09:05:09.409782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.149 [2024-11-06 09:05:09.409820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.149 qpair failed and we were unable to recover it.
00:28:56.149 [2024-11-06 09:05:09.409924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.149 [2024-11-06 09:05:09.409953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.149 qpair failed and we were unable to recover it.
00:28:56.149 [2024-11-06 09:05:09.410062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.149 [2024-11-06 09:05:09.410088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.149 qpair failed and we were unable to recover it.
00:28:56.149 [2024-11-06 09:05:09.410175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.149 [2024-11-06 09:05:09.410202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.410349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.410376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.410504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.410530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.410617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.410643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.410763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.410812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.410926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.410955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.411971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.411997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.412942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.412970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.413943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.413970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.414929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.414956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.415058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.415085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.415189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.415216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.415329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.415355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.150 qpair failed and we were unable to recover it.
00:28:56.150 [2024-11-06 09:05:09.415469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.150 [2024-11-06 09:05:09.415497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.415615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.415644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.415775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.415814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.415912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.415943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.416935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.416969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.417919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.417947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.418842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.418883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.419899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.419928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.420011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.420038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.420127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.420163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.420250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.420276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.420355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.420386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.151 [2024-11-06 09:05:09.420481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.151 [2024-11-06 09:05:09.420508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.151 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.420604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.420630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.420738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.420764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.420884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.420911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.421875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.421902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.152 [2024-11-06 09:05:09.422684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.152 qpair failed and we were unable to recover it.
00:28:56.152 [2024-11-06 09:05:09.422801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.422849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.422961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.422987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.423101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.423309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.423451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 
00:28:56.152 [2024-11-06 09:05:09.423604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.423761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.423920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.423948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 
00:28:56.152 [2024-11-06 09:05:09.424329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.424904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.424932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 
00:28:56.152 [2024-11-06 09:05:09.425027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 
00:28:56.152 [2024-11-06 09:05:09.425704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.152 [2024-11-06 09:05:09.425842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.152 qpair failed and we were unable to recover it. 00:28:56.152 [2024-11-06 09:05:09.425926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.425951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.426330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.426923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.426950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.427039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.427703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.427936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.427961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.428401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.428919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.428945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.429034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.429060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.429241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.429299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.429528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.429582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.429697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.429722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.429817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.429847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 
00:28:56.153 [2024-11-06 09:05:09.429977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.430017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.430104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.430145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.430289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.153 [2024-11-06 09:05:09.430316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.153 qpair failed and we were unable to recover it. 00:28:56.153 [2024-11-06 09:05:09.430421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.430449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.430550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.430576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.430705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.430732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.430827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.430859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.430955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.430982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.431311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.431886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.431927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.432030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.432153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.432302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.432407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.432593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.432759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.432886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.432911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.433361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.433848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.433963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.433988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.434103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.434262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.434409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.434578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 
00:28:56.154 [2024-11-06 09:05:09.434733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.154 [2024-11-06 09:05:09.434866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.154 [2024-11-06 09:05:09.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.154 qpair failed and we were unable to recover it. 00:28:56.451 [2024-11-06 09:05:09.434999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.451 [2024-11-06 09:05:09.435029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.451 qpair failed and we were unable to recover it. 00:28:56.451 [2024-11-06 09:05:09.435135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.451 [2024-11-06 09:05:09.435161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.451 qpair failed and we were unable to recover it. 00:28:56.451 [2024-11-06 09:05:09.435287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.451 [2024-11-06 09:05:09.435315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.451 qpair failed and we were unable to recover it. 
00:28:56.454 [2024-11-06 09:05:09.451757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.451782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.451915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.451955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 
00:28:56.454 [2024-11-06 09:05:09.452491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.452855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.452890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.454 qpair failed and we were unable to recover it. 00:28:56.454 [2024-11-06 09:05:09.453015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.454 [2024-11-06 09:05:09.453041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.453151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.453353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.453498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.453603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.453741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.453907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.453932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.454540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.454965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.454992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.455102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.455273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.455381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.455584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.455726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.455881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.455919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.456042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.456221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.456366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.456497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.456621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 
00:28:56.455 [2024-11-06 09:05:09.456762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.456916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.456946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.455 qpair failed and we were unable to recover it. 00:28:56.455 [2024-11-06 09:05:09.457038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.455 [2024-11-06 09:05:09.457065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 
00:28:56.456 [2024-11-06 09:05:09.457391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.457889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.457976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 
00:28:56.456 [2024-11-06 09:05:09.458086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.458196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.458329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.458466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.458618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 
00:28:56.456 [2024-11-06 09:05:09.458762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.458906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.458938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.459047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.459074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.459169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.459197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.459367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.459419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 
00:28:56.456 [2024-11-06 09:05:09.459504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.459531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.456 qpair failed and we were unable to recover it. 00:28:56.456 [2024-11-06 09:05:09.459640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.456 [2024-11-06 09:05:09.459680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.459800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.459826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.459956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.459983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.460096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 
00:28:56.457 [2024-11-06 09:05:09.460253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.460391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.460528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.460635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.460740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 
00:28:56.457 [2024-11-06 09:05:09.460859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.460890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 
00:28:56.457 [2024-11-06 09:05:09.461611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.461948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.461975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 
00:28:56.457 [2024-11-06 09:05:09.462344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 00:28:56.457 [2024-11-06 09:05:09.462898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.457 [2024-11-06 09:05:09.462926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.457 qpair failed and we were unable to recover it. 
00:28:56.457 [... the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it" sequence repeats through [2024-11-06 09:05:09.478671] for tqpair 0x1853fa0, 0x7f6acc000b90, 0x7f6ad0000b90, and 0x7f6ad8000b90, all with addr=10.0.0.2, port=4420 ...]
00:28:56.460 [2024-11-06 09:05:09.478816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.460 [2024-11-06 09:05:09.478854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.460 qpair failed and we were unable to recover it. 00:28:56.460 [2024-11-06 09:05:09.478987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.460 [2024-11-06 09:05:09.479016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.460 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.479501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.479902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.479996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.480137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.480259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.480372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.480514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.480661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.480771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.480899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.480929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.481073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.481214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.481357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.481567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.481750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.481909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.481943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.482349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.482927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.482955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 
00:28:56.461 [2024-11-06 09:05:09.483065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.483091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.483207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.483234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.461 [2024-11-06 09:05:09.483348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.461 [2024-11-06 09:05:09.483376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.461 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.483465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.483493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.483592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.483633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 
00:28:56.462 [2024-11-06 09:05:09.483731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.483761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.483865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.483901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 
00:28:56.462 [2024-11-06 09:05:09.484473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.484843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.484871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.485008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.485035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 
00:28:56.462 [2024-11-06 09:05:09.485128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.485156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.485233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.485262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.485351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.485377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.462 [2024-11-06 09:05:09.485485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.462 [2024-11-06 09:05:09.485512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.462 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.485661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.485690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.485786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.485825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.485922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.485950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.486487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.486914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.486943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.487062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.487181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.487322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.487474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.487644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.487780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.487931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.487957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.488602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.488950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.488980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.489108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463 [2024-11-06 09:05:09.489251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.489404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.489545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.489646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 00:28:56.463 [2024-11-06 09:05:09.489757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.463 [2024-11-06 09:05:09.489782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.463 qpair failed and we were unable to recover it. 
00:28:56.463-00:28:56.467 [2024-11-06 09:05:09.489881 through 09:05:09.505778] posix.c:1055:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same "connect() failed, errno = 111" / "sock connection error ... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." record repeats for tqpairs 0x1853fa0, 0x7f6ad0000b90, 0x7f6acc000b90, and 0x7f6ad8000b90; every connect attempt was refused and no qpair could be recovered.
00:28:56.467 [2024-11-06 09:05:09.505862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.505890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 
00:28:56.467 [2024-11-06 09:05:09.506606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.506905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.506994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.507139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 
00:28:56.467 [2024-11-06 09:05:09.507275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.507398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.507509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.507626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 00:28:56.467 [2024-11-06 09:05:09.507788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.467 qpair failed and we were unable to recover it. 
00:28:56.467 [2024-11-06 09:05:09.507911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.467 [2024-11-06 09:05:09.507936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 
00:28:56.468 [2024-11-06 09:05:09.508522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.508855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.508975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.509112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 
00:28:56.468 [2024-11-06 09:05:09.509254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.509388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.509499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.509646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.509789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 
00:28:56.468 [2024-11-06 09:05:09.509940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.509970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.510118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.510146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.510281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.510308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.510423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.510450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.510600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.510629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 
00:28:56.468 [2024-11-06 09:05:09.510742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.468 [2024-11-06 09:05:09.510768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.468 qpair failed and we were unable to recover it. 00:28:56.468 [2024-11-06 09:05:09.510851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.510877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.510983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-11-06 09:05:09.511370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.511914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.511942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-11-06 09:05:09.512055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.512177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.512290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.512434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.512592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-11-06 09:05:09.512759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.512886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.512914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-11-06 09:05:09.513566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.513880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.513985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.514129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.469 [2024-11-06 09:05:09.514243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.514394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.514539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.514685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 00:28:56.469 [2024-11-06 09:05:09.514844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.469 [2024-11-06 09:05:09.514885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.469 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-11-06 09:05:09.515045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-11-06 09:05:09.515739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.515888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.515988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.516028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.516145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.516173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 00:28:56.470 [2024-11-06 09:05:09.516256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.470 [2024-11-06 09:05:09.516282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.470 qpair failed and we were unable to recover it. 
00:28:56.470 [2024-11-06 09:05:09.516384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.516454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.516566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.516593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.516708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.516735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.516824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.516862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.516966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.516993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.517881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.517920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.518922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.518950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.470 [2024-11-06 09:05:09.519069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.470 [2024-11-06 09:05:09.519096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.470 qpair failed and we were unable to recover it.
00:28:56.474 [2024-11-06 09:05:09.532598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.532623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.532734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.532763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.532859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.532888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.532999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.533100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-11-06 09:05:09.533291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.533564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.533678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.533825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.533968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.533996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-11-06 09:05:09.534089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.534235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.534347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.534514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.534657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 
00:28:56.474 [2024-11-06 09:05:09.534769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.534899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.534927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.535042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.535069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.535189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.474 [2024-11-06 09:05:09.535223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.474 qpair failed and we were unable to recover it. 00:28:56.474 [2024-11-06 09:05:09.535335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.535362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.535455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.535484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.535624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.535652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.535733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.535760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.535894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.535922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.536147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.536846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.536960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.536987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.537469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.537971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.537999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.538119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.538290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.538406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.538527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.538655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.538856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.538885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.538975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.539141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.539273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.539435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 
00:28:56.475 [2024-11-06 09:05:09.539600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.539724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.475 [2024-11-06 09:05:09.539867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.475 [2024-11-06 09:05:09.539897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.475 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-11-06 09:05:09.540261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.540874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.540981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-11-06 09:05:09.541090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.541197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.541341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.541482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.541622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-11-06 09:05:09.541742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.541887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.541914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-11-06 09:05:09.542410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 00:28:56.476 [2024-11-06 09:05:09.542953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.476 [2024-11-06 09:05:09.542979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.476 qpair failed and we were unable to recover it. 
00:28:56.476 [2024-11-06 09:05:09.543067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.476 [2024-11-06 09:05:09.543095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.476 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it.) repeats from 09:05:09.543237 through 09:05:09.559023, with only the timestamp and the tqpair handle varying — cycling through tqpair=0x7f6ad0000b90, 0x7f6ad8000b90, 0x7f6acc000b90, and 0x1853fa0; duplicate entries omitted ...]
00:28:56.480 [2024-11-06 09:05:09.559172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.559287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.559401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.559521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.559664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.559801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.559918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.559945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.560424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.560911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.560939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.561027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.561192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.561329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.561441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.561591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.561757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.561901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.561929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.562047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.562216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.562323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.562553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.562744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.562912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.562941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.563465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.563922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.563949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.564206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 
00:28:56.480 [2024-11-06 09:05:09.564821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.564888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.564995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.565035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.480 [2024-11-06 09:05:09.565150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.480 [2024-11-06 09:05:09.565178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.480 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.565371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.565400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.565616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.565675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.565778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.565818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.565958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.565986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.566132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.566304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.566497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.566643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.566784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.566920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.566960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.567343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.567883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.567908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.568002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.568648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.568914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.568943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.569279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.569782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.569927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.569968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.570112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.570255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.570395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.570600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.570706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.570874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.570901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.571049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.571076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.571198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.571226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 00:28:56.481 [2024-11-06 09:05:09.571366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.481 [2024-11-06 09:05:09.571392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.481 qpair failed and we were unable to recover it. 
00:28:56.481 [2024-11-06 09:05:09.571484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.571511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.571627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.571653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.571744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.571770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.571910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.571937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.572057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.572187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.572329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.572500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.481 [2024-11-06 09:05:09.572653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.481 qpair failed and we were unable to recover it.
00:28:56.481 [2024-11-06 09:05:09.572733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.572760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.572900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.572928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.573943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.573969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.574941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.574967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.575952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.575979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.576958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.576988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.577102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.577371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.577614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.577751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.577884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.577978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.578873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.578900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.579843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.579885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.580945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.580974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.581933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.581960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.582041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.582067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.482 qpair failed and we were unable to recover it.
00:28:56.482 [2024-11-06 09:05:09.582184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.482 [2024-11-06 09:05:09.582211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.582352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.582458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.582600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.582740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.582889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.582997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.583916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.583945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.584951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.584979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.585100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.483 [2024-11-06 09:05:09.585130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.483 qpair failed and we were unable to recover it.
00:28:56.483 [2024-11-06 09:05:09.585247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.585360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.585502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.585666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.585805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.585955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.585983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.586649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.586919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.586946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.587267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.587842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.587965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.587993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.588728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.588887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.588980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.589373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.589909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.589935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.590052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.590672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.590939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.590966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.591062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.591206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.591380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.591512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.591659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.591853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.591895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.592019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.592047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 
00:28:56.483 [2024-11-06 09:05:09.592207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.592277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.592375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.592403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.483 [2024-11-06 09:05:09.592499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.483 [2024-11-06 09:05:09.592526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.483 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.592603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.592630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.592728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.592768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.592911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.592939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.593047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.593160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.593352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.593549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.593732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.593894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.593923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.594367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.594948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.594976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.595063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.595233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.595416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.595591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.595747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.595904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.595932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.596011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.596206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.596409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.596600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.596749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.596919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.596947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.484 [2024-11-06 09:05:09.597424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.597828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 00:28:56.484 [2024-11-06 09:05:09.597986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.484 [2024-11-06 09:05:09.598015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.484 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.613473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.613529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.613668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.613694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.613805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.613837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.613953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.613982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.614099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.614304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.614503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.614665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.614824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.614941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.614966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.615060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.615089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.615170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.615197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.615410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.615469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.615615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.615674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.615799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.615827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.615974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.616688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.616860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.616975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.617376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.617920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.617946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.618036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.618653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.618909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.618990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.619171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.619337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.619487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.619622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.619770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.619948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.619977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.620065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.620211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.620349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.620472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.620638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.620786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.620927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.620954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.621426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.621962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.621990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.622102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.622246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.622382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.622501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.622654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 
00:28:56.486 [2024-11-06 09:05:09.622813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.486 qpair failed and we were unable to recover it. 00:28:56.486 [2024-11-06 09:05:09.622970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.486 [2024-11-06 09:05:09.622998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 
00:28:56.487 [2024-11-06 09:05:09.623486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.623936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.623963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 00:28:56.487 [2024-11-06 09:05:09.624056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.624083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it. 
00:28:56.487 [2024-11-06 09:05:09.624168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.487 [2024-11-06 09:05:09.624195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.487 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously from 09:05:09.624282 through 09:05:09.640845: connect() fails with errno = 111 (ECONNREFUSED) for tqpairs 0x1853fa0, 0x7f6acc000b90, 0x7f6ad0000b90, and 0x7f6ad8000b90, all targeting addr=10.0.0.2, port=4420, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:56.488 [2024-11-06 09:05:09.640960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.640986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 
00:28:56.488 [2024-11-06 09:05:09.641613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.641910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.641938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 
00:28:56.488 [2024-11-06 09:05:09.642321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.642934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.642962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 
00:28:56.488 [2024-11-06 09:05:09.643075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.488 [2024-11-06 09:05:09.643102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.488 qpair failed and we were unable to recover it. 00:28:56.488 [2024-11-06 09:05:09.643216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.643243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.643385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.643412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.643614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.643653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.643776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.643804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.643907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.643936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.644655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.644851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.644982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.645153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.645324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.645485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.645601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.645745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.645917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.645945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.646220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.646854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.646972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.646997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.647498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.647913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.647941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.648029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.648183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.648333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.648510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.648625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.648776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.648893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.648920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.649044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.649210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.649406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.649662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.649828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.649947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.649974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.650587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.650962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.650989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.651110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.651246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.651396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.651508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.651642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 00:28:56.489 [2024-11-06 09:05:09.651755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.489 [2024-11-06 09:05:09.651783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.489 qpair failed and we were unable to recover it. 
00:28:56.489 [2024-11-06 09:05:09.651917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.651958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.489 [2024-11-06 09:05:09.652900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.489 [2024-11-06 09:05:09.652928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.489 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.653917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.653943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.654912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.654940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.655897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.655931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.656922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.656951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.657910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.657939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.658919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.658945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.659890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.659918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.660868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.660985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.661905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.661932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.662899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.662926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.663062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.663103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.490 qpair failed and we were unable to recover it.
00:28:56.490 [2024-11-06 09:05:09.663258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.490 [2024-11-06 09:05:09.663298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.663398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.663427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.663638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.663700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.663812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.663844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.663935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.663962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.664872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.664900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.665923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.665948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.666912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.666941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.667932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.667959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.668077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.668105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.668195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.491 [2024-11-06 09:05:09.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.491 qpair failed and we were unable to recover it.
00:28:56.491 [2024-11-06 09:05:09.668338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.668365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.668458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.668486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.668600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.668631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.668788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.668817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.668938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.668966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.669163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.669328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.669505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.669673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.669787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.669940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.669966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.670657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.670859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.670976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.671327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.671909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.671937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.672026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.672159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.672411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.672563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.672688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.672799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.672932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.672959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.673066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.673093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.673176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.673204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.673299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.673327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 
00:28:56.491 [2024-11-06 09:05:09.673450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.491 [2024-11-06 09:05:09.673480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.491 qpair failed and we were unable to recover it. 00:28:56.491 [2024-11-06 09:05:09.673570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.673599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.673681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.673706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.673821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.673855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.673945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.673969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.674118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.674144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.674347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.674402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.674622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.674675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.674791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.674817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.674909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.674938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.675047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.675250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.675503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.675636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.675746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.675856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.675970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.675996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.676452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.676922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.676948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.677060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.677207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.677319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.677432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.677577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.677704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.677867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.677907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.678390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 00:28:56.492 [2024-11-06 09:05:09.678910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.492 [2024-11-06 09:05:09.678938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.492 qpair failed and we were unable to recover it. 
00:28:56.492 [2024-11-06 09:05:09.679021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.492 [2024-11-06 09:05:09.679046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.492 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1055:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error, followed by "qpair failed and we were unable to recover it." — repeats continuously from 09:05:09.679 through 09:05:09.695 for tqpair 0x1853fa0 and tqpairs 0x7f6acc000b90, 0x7f6ad0000b90, and 0x7f6ad8000b90, all against addr=10.0.0.2, port=4420 ...]
00:28:56.494 [2024-11-06 09:05:09.695684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.494 [2024-11-06 09:05:09.695724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.495 qpair failed and we were unable to recover it.
00:28:56.495 [2024-11-06 09:05:09.695818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.695858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.695986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.696136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.696277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.696432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.696602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.696761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.696928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.696955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.697371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.697885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.697914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.698033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.698737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.698969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.698996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.699371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.699887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.699917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.700038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.700160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.700358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.700577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.700756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.700916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.700943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.701057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.701202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.701398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.701650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.495 [2024-11-06 09:05:09.701810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.701966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.701996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.702119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.702146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.702231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.702257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 00:28:56.495 [2024-11-06 09:05:09.702348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.495 [2024-11-06 09:05:09.702374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.495 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.702459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.702492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.702580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.702610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.702693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.702721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.702829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.702862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.702948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.702975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.703096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.703201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.703320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.703471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.703637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.703785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.703971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.703999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.704567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.704928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.704954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.705060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.705190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.705417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.705559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.705663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.705817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.705857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.705979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.706007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.706127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.706155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.706255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.706284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.706424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.706451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 00:28:56.496 [2024-11-06 09:05:09.706570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.496 [2024-11-06 09:05:09.706598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.496 qpair failed and we were unable to recover it. 
00:28:56.496 [2024-11-06 09:05:09.706738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.706765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.706862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.706889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.706977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.707004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.707135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.707175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.707327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.707390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.707547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.496 [2024-11-06 09:05:09.707607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.496 qpair failed and we were unable to recover it.
00:28:56.496 [2024-11-06 09:05:09.707717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.707745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.707858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.707898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.707986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.708928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.708958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.709047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.777 [2024-11-06 09:05:09.709074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.777 qpair failed and we were unable to recover it.
00:28:56.777 [2024-11-06 09:05:09.709214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.709400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.709543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.709656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.709785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.709923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.709951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.710901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.710930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.711889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.711920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.712825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.778 [2024-11-06 09:05:09.712977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.778 [2024-11-06 09:05:09.713005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.778 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.713229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.713302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.713399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.713427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.713614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.713643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.713767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.713795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.713919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.713959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.714960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.714992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.715944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.715972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.716055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.716082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.716297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.716353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.716512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.716578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.716717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.716743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.716864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.716892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.717037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.717066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.717225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.717278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.779 [2024-11-06 09:05:09.717439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.779 [2024-11-06 09:05:09.717485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.779 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.717604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.717632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.717755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.717792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.717886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.717912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.717995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.718960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.718987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.719947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.719988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.720915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.721083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.721111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.721205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.721230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.721389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.721457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.721578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.721647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.780 qpair failed and we were unable to recover it.
00:28:56.780 [2024-11-06 09:05:09.721733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.780 [2024-11-06 09:05:09.721759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.721891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.721923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.722946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.722974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.781 [2024-11-06 09:05:09.723843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.781 qpair failed and we were unable to recover it.
00:28:56.781 [2024-11-06 09:05:09.723974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.724098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.724209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.724468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.724647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 
00:28:56.781 [2024-11-06 09:05:09.724786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.724911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.724938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 
00:28:56.781 [2024-11-06 09:05:09.725360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.781 [2024-11-06 09:05:09.725801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.781 [2024-11-06 09:05:09.725830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.781 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.725937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.725969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 
00:28:56.782 [2024-11-06 09:05:09.726079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.726259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.726365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.726483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.726660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 
00:28:56.782 [2024-11-06 09:05:09.726841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.726869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.726978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.727113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.727284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.727477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 
00:28:56.782 [2024-11-06 09:05:09.727662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.727815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.727954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.727983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 
00:28:56.782 [2024-11-06 09:05:09.728344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.728913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.728941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 
00:28:56.782 [2024-11-06 09:05:09.729048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.729076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.729166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.729193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.729330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.782 [2024-11-06 09:05:09.729358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.782 qpair failed and we were unable to recover it. 00:28:56.782 [2024-11-06 09:05:09.729444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.729473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.729559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.729587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.729683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.729709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.729791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.729818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.729964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.729991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.730119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.730291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.730459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.730635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.730749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.730895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.730923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.731118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.731787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.731914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.731943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.732088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.732201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.732375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.732552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.732704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.732850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.732889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.733014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.733041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.733151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.733189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 
00:28:56.783 [2024-11-06 09:05:09.733302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.733327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.783 qpair failed and we were unable to recover it. 00:28:56.783 [2024-11-06 09:05:09.733403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.783 [2024-11-06 09:05:09.733429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.733511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.733539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.733639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.733667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.733807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.733850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 
00:28:56.784 [2024-11-06 09:05:09.733962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.733989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.734073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.734098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.734176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.734203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.734288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.734315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 00:28:56.784 [2024-11-06 09:05:09.734449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.784 [2024-11-06 09:05:09.734476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.784 qpair failed and we were unable to recover it. 
00:28:56.784 [2024-11-06 09:05:09.734587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.734613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.734720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.734747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.734863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.734891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.734990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.735882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.735913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.736894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.736987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.784 [2024-11-06 09:05:09.737801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.784 [2024-11-06 09:05:09.737837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.784 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.737924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.737952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.738916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.738945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.739899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.739927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.740885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.740927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.741894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.741925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.742069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.742217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.742364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.742497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.785 [2024-11-06 09:05:09.742657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.785 qpair failed and we were unable to recover it.
00:28:56.785 [2024-11-06 09:05:09.742801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.742828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.742951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.742977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.743144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.743264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.743447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.743696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.743842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.743988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.744221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.744435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.744694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.744840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.744948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.744975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.745086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.745293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.745568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.745741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.745862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.745981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.746930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.746958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.747929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.786 [2024-11-06 09:05:09.747957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.786 qpair failed and we were unable to recover it.
00:28:56.786 [2024-11-06 09:05:09.748073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.748970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.748997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.749917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.749946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.750955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.750984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.751066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.751094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.751246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.751273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.751491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.787 [2024-11-06 09:05:09.751545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:56.787 qpair failed and we were unable to recover it.
00:28:56.787 [2024-11-06 09:05:09.751660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.751689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.751795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.751844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.751946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.751974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.752114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.752277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 
00:28:56.787 [2024-11-06 09:05:09.752512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.752677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.752819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.752940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.752967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 00:28:56.787 [2024-11-06 09:05:09.753070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.787 [2024-11-06 09:05:09.753097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.787 qpair failed and we were unable to recover it. 
00:28:56.787 [2024-11-06 09:05:09.753183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.753312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.753419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.753537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.753705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.753851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.753878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.753995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.754562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.754938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.754964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.755195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.755746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.755908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.755994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.756377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 00:28:56.788 [2024-11-06 09:05:09.756967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.788 [2024-11-06 09:05:09.756992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.788 qpair failed and we were unable to recover it. 
00:28:56.788 [2024-11-06 09:05:09.757085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.757675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.757941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.757966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.758324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.758845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.758876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.758975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.759657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.759926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.759954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.760285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.760823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 
00:28:56.789 [2024-11-06 09:05:09.760950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.760976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.761064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.789 [2024-11-06 09:05:09.761090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.789 qpair failed and we were unable to recover it. 00:28:56.789 [2024-11-06 09:05:09.761174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.761340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.761480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.761616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.761756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.761870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.761895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.762284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.762761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.762901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.762927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.763527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.763928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.763954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.764183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.764799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.764950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.764981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.765068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.765095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.765178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.765207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.765385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.765439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 
00:28:56.790 [2024-11-06 09:05:09.765545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.765572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.790 qpair failed and we were unable to recover it. 00:28:56.790 [2024-11-06 09:05:09.765709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.790 [2024-11-06 09:05:09.765736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.765814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.765846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.765932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.765965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.766196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.766854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.766881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.766994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.767526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.767887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.767915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.768109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.768684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.768935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.768960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.769311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.791 [2024-11-06 09:05:09.769773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 
00:28:56.791 [2024-11-06 09:05:09.769917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.791 [2024-11-06 09:05:09.769947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.791 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.770627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.770890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.770917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.771269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.771849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.771877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.772013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.772660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.772960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.772986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.773318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.773787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 
00:28:56.792 [2024-11-06 09:05:09.773904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.773930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.774046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.774071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.774184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.774210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.774297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.792 [2024-11-06 09:05:09.774324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.792 qpair failed and we were unable to recover it. 00:28:56.792 [2024-11-06 09:05:09.774463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.793 [2024-11-06 09:05:09.774489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.793 qpair failed and we were unable to recover it. 
00:28:56.794 [2024-11-06 09:05:09.779602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.779627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.779719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.779760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.779859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.779890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.780010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.780038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.780153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.780180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.782931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.782959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.783048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.783073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.794 qpair failed and we were unable to recover it.
00:28:56.794 [2024-11-06 09:05:09.783187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.794 [2024-11-06 09:05:09.783214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:56.795 qpair failed and we were unable to recover it.
00:28:56.795 [2024-11-06 09:05:09.783320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.795 [2024-11-06 09:05:09.783360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.795 qpair failed and we were unable to recover it.
00:28:56.795 [2024-11-06 09:05:09.783484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.795 [2024-11-06 09:05:09.783511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.795 qpair failed and we were unable to recover it.
00:28:56.796 [2024-11-06 09:05:09.791063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.791090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.791199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.791271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.791463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.791489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.791702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.791757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.791905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.791932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.792020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.792608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.792940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.792969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.793381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.793894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.793983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.794127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.794238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.794384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.794500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.794667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.794793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.794821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.794977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.795005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.795086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.795112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.795201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.795228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.795343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.795370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 
00:28:56.796 [2024-11-06 09:05:09.795585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.796 [2024-11-06 09:05:09.795651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.796 qpair failed and we were unable to recover it. 00:28:56.796 [2024-11-06 09:05:09.795872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.795900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.795986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.796013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.796108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.796186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.796369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.796435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.796627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.796691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.796912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.796939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.797075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.797102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.797266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.797340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.797580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.797645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.797896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.797924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.798006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.798032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.798122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.798148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.798330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.798395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.798641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.798706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.798960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.798987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.799093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.799153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.799334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.799394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.799642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.799708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.799952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.799980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.800073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.800099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.800286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.800312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.800456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.800502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.800732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.800758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.800871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.800897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.801010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.801134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.801278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.801388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.801610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.801887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.801915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.801995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.802130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.802240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.802376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 
00:28:56.797 [2024-11-06 09:05:09.802575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.797 qpair failed and we were unable to recover it. 00:28:56.797 [2024-11-06 09:05:09.802801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.797 [2024-11-06 09:05:09.802828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.802933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.802960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.803056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.803082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.803223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.803287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 
00:28:56.798 [2024-11-06 09:05:09.803568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.803633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.803826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.803866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.803960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.803987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.804115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.804142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 00:28:56.798 [2024-11-06 09:05:09.804342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.798 [2024-11-06 09:05:09.804369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.798 qpair failed and we were unable to recover it. 
00:28:56.798 [2024-11-06 09:05:09.804552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.798 [2024-11-06 09:05:09.804578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.798 qpair failed and we were unable to recover it.
00:28:56.801 [... the same posix_sock_create connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock error against tqpair=0x7f6ad8000b90, addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.", repeats for every retry from 09:05:09.804689 through 09:05:09.838917 ...]
00:28:56.801 [2024-11-06 09:05:09.839172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.839237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.839456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.839519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.839807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.839894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.840148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.840213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.840498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.840561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.801 [2024-11-06 09:05:09.840841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.840911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.841201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.841266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.841487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.841550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.841800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.841901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.842163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.842227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.801 [2024-11-06 09:05:09.842482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.842547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.842764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.842853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.843101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.843167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.843415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.843478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.843740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.843803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.801 [2024-11-06 09:05:09.844093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.844157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.844404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.844478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.844740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.844805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.845077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.845141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.845424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.845488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.801 [2024-11-06 09:05:09.845672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.845736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.845989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.846056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.846371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.846436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.846737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.846802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.847077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.847141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.801 [2024-11-06 09:05:09.847397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.847460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.847712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.847779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.848015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.848081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.848336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.848400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 00:28:56.801 [2024-11-06 09:05:09.848659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.801 [2024-11-06 09:05:09.848722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.801 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.848970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.849035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.849234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.849298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.849505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.849570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.849786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.849867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.850162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.850227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.850462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.850527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.850769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.850850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.851073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.851141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.851354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.851419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.851682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.851747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.851969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.852038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.852297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.852360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.852604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.852669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.852966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.853033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.853263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.853327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.853615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.853678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.853961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.854027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.854321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.854385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.854680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.854753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.854985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.855050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.855239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.855303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.855489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.855552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.855859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.855925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.856210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.856275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.856481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.856545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.856787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.856863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.857148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.857214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.857504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.857567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.857824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.857900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.858075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.858140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.858393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.858457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.858706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.858770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.859045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.859111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.859414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.859479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.859740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.859805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.860124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.860189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.860441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.860505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.860754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.860819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.861074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.861138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.861391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.861455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 
00:28:56.802 [2024-11-06 09:05:09.861700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.802 [2024-11-06 09:05:09.861766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.802 qpair failed and we were unable to recover it. 00:28:56.802 [2024-11-06 09:05:09.862069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.803 [2024-11-06 09:05:09.862133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.803 qpair failed and we were unable to recover it. 00:28:56.803 [2024-11-06 09:05:09.862339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.803 [2024-11-06 09:05:09.862406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.803 qpair failed and we were unable to recover it. 00:28:56.803 [2024-11-06 09:05:09.862598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.803 [2024-11-06 09:05:09.862663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.803 qpair failed and we were unable to recover it. 00:28:56.803 [2024-11-06 09:05:09.862913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.803 [2024-11-06 09:05:09.862981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.803 qpair failed and we were unable to recover it. 
00:28:56.803 [2024-11-06 09:05:09.863286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.803 [2024-11-06 09:05:09.863352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.803 qpair failed and we were unable to recover it.
(The connect()/connection-error/"qpair failed" triplet above repeats for tqpair=0x7f6ad8000b90 roughly 80 times between 09:05:09.863 and 09:05:09.889, every occurrence with errno = 111 against 10.0.0.2:4420.)
00:28:56.805 [2024-11-06 09:05:09.889149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1861f30 is same with the state(6) to be set
00:28:56.805 [2024-11-06 09:05:09.889538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.805 [2024-11-06 09:05:09.889639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:56.805 qpair failed and we were unable to recover it.
(The same triplet then repeats for tqpair=0x7f6ad0000b90 roughly 35 times between 09:05:09.889 and 09:05:09.900, unchanged in every field except the timestamps.)
00:28:56.805 [2024-11-06 09:05:09.900927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.805 [2024-11-06 09:05:09.900993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.805 qpair failed and we were unable to recover it. 00:28:56.805 [2024-11-06 09:05:09.901280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.901345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.901595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.901665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.901910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.901979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.902275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.902340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.902574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.902640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.902906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.902972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.903261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.903327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.903624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.903692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.903946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.904012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.904201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.904270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.904499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.904565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.904815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.904895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.905193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.905260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.905545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.905611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.905917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.905983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.906216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.906282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.906565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.906631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.906880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.906947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.907185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.907251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.907544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.907611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.907888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.907954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.908198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.908266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.908558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.908624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.908828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.908947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.909254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.909319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.909571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.909636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.909893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.909960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.910220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.910284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.910506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.910572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.910864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.910934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.911136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.911211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.911435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.911500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.911703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.911770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.912008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.912074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.912360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.912425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.912676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.912744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.913018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.913084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.913352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.913417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.913675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.913745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 
00:28:56.806 [2024-11-06 09:05:09.914016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.914084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.914341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.914407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.914649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.914714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.806 qpair failed and we were unable to recover it. 00:28:56.806 [2024-11-06 09:05:09.914955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.806 [2024-11-06 09:05:09.915022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.915317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.915382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.915641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.915708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.915936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.916006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.916266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.916331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.916611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.916676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.916975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.917042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.917251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.917318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.917602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.917667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.917928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.917996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.918289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.918354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.918642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.918707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.918988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.919053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.919280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.919346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.919575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.919640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.919904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.919972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.920237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.920302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.920590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.920656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.920904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.920972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.921266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.921331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.921615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.921681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.921927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.921994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.922275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.922342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.922624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.922690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.922948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.923013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.923301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.923367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.923605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.923672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.923902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.923971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.924209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.924284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.924572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.924639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.924859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.924927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 00:28:56.807 [2024-11-06 09:05:09.925150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.925217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.807 [2024-11-06 09:05:09.925503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.807 [2024-11-06 09:05:09.925569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.807 qpair failed and we were unable to recover it. 
00:28:56.810 [... the same three-message sequence (posix.c:1055 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats ~114 more times with timestamps advancing from 09:05:09.925819 to 09:05:09.963242; repeats elided. errno 111 is ECONNREFUSED: the target at 10.0.0.2:4420 refused every connection attempt. ...]
00:28:56.810 [2024-11-06 09:05:09.963484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.963550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.963794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.963891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.964149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.964214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.964448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.964512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.964745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.964810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 
00:28:56.810 [2024-11-06 09:05:09.965107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.965173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.965421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.965486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.965743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.965808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.966079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.966145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.966423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.966488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 
00:28:56.810 [2024-11-06 09:05:09.966730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.966795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.967099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.967165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.967363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.967430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.967659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.967724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.967982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.968050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 
00:28:56.810 [2024-11-06 09:05:09.968281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.968349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.968603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.968670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.968954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.969021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.969275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.969341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.969602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.969667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 
00:28:56.810 [2024-11-06 09:05:09.969861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.969928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.970136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.970203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.970492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.970557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.970779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.970875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.971161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.971227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 
00:28:56.810 [2024-11-06 09:05:09.971511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.971576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.971865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.971932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.972211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.972275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.972575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.810 [2024-11-06 09:05:09.972639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.810 qpair failed and we were unable to recover it. 00:28:56.810 [2024-11-06 09:05:09.972889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.972957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.973243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.973309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.973556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.973621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.973912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.973987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.974203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.974271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.974507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.974573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.974859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.974927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.975146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.975214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.975470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.975535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.975818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.975896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.976196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.976261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.976515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.976580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.976849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.976937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.977158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.977227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.977508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.977573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.977847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.977914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.978112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.978179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.978380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.978445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.978693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.978757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.979066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.979133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.979425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.979491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.979729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.979796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.980023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.980088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.980329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.980394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.980584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.980650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.980901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.980967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.981221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.981287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.981505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.981574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.981823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.981900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.982178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.982245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.982538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.982604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.982898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.982963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.983253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.983318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.983547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.983613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.983871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.983939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.984133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.984198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.984470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.984536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.984733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.984802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.985040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.985105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.985399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.985474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.985761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.985826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 
00:28:56.811 [2024-11-06 09:05:09.986098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.986163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.811 qpair failed and we were unable to recover it. 00:28:56.811 [2024-11-06 09:05:09.986446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.811 [2024-11-06 09:05:09.986512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.986764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.986829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.987108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.987174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.987421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.987489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 
00:28:56.812 [2024-11-06 09:05:09.987774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.987857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.988117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.988183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.988433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.988499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.988708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.988774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 00:28:56.812 [2024-11-06 09:05:09.989026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.812 [2024-11-06 09:05:09.989092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.812 qpair failed and we were unable to recover it. 
00:28:56.813 [2024-11-06 09:05:10.013958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:56.813 [2024-11-06 09:05:10.013999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:56.813 qpair failed and we were unable to recover it.
00:28:56.814 [2024-11-06 09:05:10.016939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.016967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 
00:28:56.814 [2024-11-06 09:05:10.017557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.017907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.017936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.018047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.018074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.018205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.018271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 
00:28:56.814 [2024-11-06 09:05:10.018522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.018587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.018823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.018891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.019021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.019165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.019392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 
00:28:56.814 [2024-11-06 09:05:10.019540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.019680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.019820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.019901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.020026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.020216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 
00:28:56.814 [2024-11-06 09:05:10.020414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.020586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.020745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.020922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.020959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.021076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.021106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 
00:28:56.814 [2024-11-06 09:05:10.021201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.021228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.021310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.814 [2024-11-06 09:05:10.021346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.814 qpair failed and we were unable to recover it. 00:28:56.814 [2024-11-06 09:05:10.021663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.021728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.021939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.021965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.022054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.022181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.022292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.022491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.022759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.022965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.022992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.023137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.023183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.023409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.023489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.023700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.023731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.023823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.023887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.024006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.024151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.024363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.024610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.024768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.024945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.024972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.025054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.025169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.025309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.025579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.025703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.025846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.025892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.026736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.026911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.026991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.027119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.027312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.027441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.027625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.027753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.027929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.027957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.028169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.028811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.028953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.028979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.029065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.029091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.029186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.029213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.029337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.029368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 
00:28:56.815 [2024-11-06 09:05:10.029458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.029488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.029598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.815 [2024-11-06 09:05:10.029628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.815 qpair failed and we were unable to recover it. 00:28:56.815 [2024-11-06 09:05:10.029742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.029788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.029913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.029948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.030061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.030203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.030333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.030466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.030625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.030796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.030939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.030966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.031657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.031960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.031990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.032353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.032907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.032938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.033033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.033720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.033882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.033992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.034402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.034953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.034984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.035109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.035235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.035391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.035511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.035647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.035801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.035928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.035959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.036065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.036233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.036387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 
00:28:56.816 [2024-11-06 09:05:10.036524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.036654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.816 [2024-11-06 09:05:10.036786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.816 [2024-11-06 09:05:10.036816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.816 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.036916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.036946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.037192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.037801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.037968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.037997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.038489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.038868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.038898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.039133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.039839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.039884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.039989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.040172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.040332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.040522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.040656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.040788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.040953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.040983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.041085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.041222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.041338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.041488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.041686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.041888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.041941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.042044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.042176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.042300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.042473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.042641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.042794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:56.817 [2024-11-06 09:05:10.042936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.042965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.043093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.043121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.043354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.043411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.043505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.043534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 00:28:56.817 [2024-11-06 09:05:10.043656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:56.817 [2024-11-06 09:05:10.043685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:56.817 qpair failed and we were unable to recover it. 
00:28:57.100 [2024-11-06 09:05:10.043794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.100 [2024-11-06 09:05:10.043884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.100 qpair failed and we were unable to recover it. 00:28:57.100 [2024-11-06 09:05:10.044007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.100 [2024-11-06 09:05:10.044041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.101 qpair failed and we were unable to recover it. 00:28:57.101 [2024-11-06 09:05:10.044171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.101 [2024-11-06 09:05:10.044199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.101 qpair failed and we were unable to recover it. 00:28:57.101 [2024-11-06 09:05:10.044297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.101 [2024-11-06 09:05:10.044325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.101 qpair failed and we were unable to recover it. 00:28:57.101 [2024-11-06 09:05:10.044419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.101 [2024-11-06 09:05:10.044449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.101 qpair failed and we were unable to recover it. 
00:28:57.101 [2024-11-06 09:05:10.044589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.044620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.044726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.044755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.044875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.044918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.045846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.045973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.046921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.046971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.047931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.047967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.048906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.048932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.049022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.101 [2024-11-06 09:05:10.049047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.101 qpair failed and we were unable to recover it.
00:28:57.101 [2024-11-06 09:05:10.049135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.049915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.049944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.050908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.050946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.051065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.051102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.051318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.051364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.051505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.051555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.051706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.051742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.051934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.051990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.052211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.052275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.052496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.052542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.052689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.052727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.052858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.052894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.053019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.053053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.053196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.053238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.053457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.053500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.053637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.053679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.053827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.053895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.054021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.054058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.054185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.054240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.054401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.054440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.054601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.054639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.054757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.054797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.055005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.055062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.055325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.102 [2024-11-06 09:05:10.055369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.102 qpair failed and we were unable to recover it.
00:28:57.102 [2024-11-06 09:05:10.055522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.055563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.055727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.055769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.055932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.055970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.056091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.056146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.056279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.056317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.056503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.056541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.056675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.056713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.056851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.056909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.057028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.057088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.057256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.057292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.057436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.057473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.057615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.057652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.057854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.057891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.058054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.058302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.058504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.058675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.058862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.058979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.059170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.059396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.059557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.059758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.059943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.059980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.060961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.060999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.061155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.103 [2024-11-06 09:05:10.061191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.103 qpair failed and we were unable to recover it.
00:28:57.103 [2024-11-06 09:05:10.061349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.103 [2024-11-06 09:05:10.061384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.103 qpair failed and we were unable to recover it. 00:28:57.103 [2024-11-06 09:05:10.061507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.103 [2024-11-06 09:05:10.061543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.103 qpair failed and we were unable to recover it. 00:28:57.103 [2024-11-06 09:05:10.061666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.103 [2024-11-06 09:05:10.061701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.103 qpair failed and we were unable to recover it. 00:28:57.103 [2024-11-06 09:05:10.061883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.103 [2024-11-06 09:05:10.061939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.103 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.062069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.062248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.062444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.062596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.062752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.062954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.062991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.063103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.063149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.063293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.063327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.063504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.063537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.063680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.063714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.063874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.063909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.064030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.064213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.064362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.064519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.064647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.064795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.064943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.064970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.065483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.065878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.065917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.066167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 
00:28:57.104 [2024-11-06 09:05:10.066818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.066935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.066960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.067037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.104 [2024-11-06 09:05:10.067062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.104 qpair failed and we were unable to recover it. 00:28:57.104 [2024-11-06 09:05:10.067150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.067252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.067389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.067518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.067624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.067721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.067825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.067951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.067977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.068612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.068901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.068993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.069211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.069813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.069931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.069955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.070415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 00:28:57.105 [2024-11-06 09:05:10.070911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.105 [2024-11-06 09:05:10.070937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.105 qpair failed and we were unable to recover it. 
00:28:57.105 [2024-11-06 09:05:10.071032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.071062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.071154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.071179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.071285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.071321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.071469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.071506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.072531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.072580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 
00:28:57.106 [2024-11-06 09:05:10.072733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.072771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.072934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.072966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.073064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.073236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.073399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 
00:28:57.106 [2024-11-06 09:05:10.073549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.073702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.073902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.074075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.074106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 00:28:57.106 [2024-11-06 09:05:10.074232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.106 [2024-11-06 09:05:10.074282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.106 qpair failed and we were unable to recover it. 
00:28:57.106 [2024-11-06 09:05:10.074400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.106 [2024-11-06 09:05:10.074431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.106 qpair failed and we were unable to recover it.
00:28:57.106 [2024-11-06 09:05:10.074585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.106 [2024-11-06 09:05:10.074628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.106 qpair failed and we were unable to recover it.
00:28:57.106 last two messages repeated ~113 more times between [2024-11-06 09:05:10.074750] and [2024-11-06 09:05:10.095633] (errno = 111 / ECONNREFUSED; tqpair=0x1853fa0, 0x7f6ad8000b90, 0x7f6ad0000b90, 0x7f6acc000b90; addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it."
00:28:57.109 [2024-11-06 09:05:10.095789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.109 [2024-11-06 09:05:10.095827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.109 qpair failed and we were unable to recover it. 00:28:57.109 [2024-11-06 09:05:10.096015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.109 [2024-11-06 09:05:10.096045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.109 qpair failed and we were unable to recover it. 00:28:57.109 [2024-11-06 09:05:10.096144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.096175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.096302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.096333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.096455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.096486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.096626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.096664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.096817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.096873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.097042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.097169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.097384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.097548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.097717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.097886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.097926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.098038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.098077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.098234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.098274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.098443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.098480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.098625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.098663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.098824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.098872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.099015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.099053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.099207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.099244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.099399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.099438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.099609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.099640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.099797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.099827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.100009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.100244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.100403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.100628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.100790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.100944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.100982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.101135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.101361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.101487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.101617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.101748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.101917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.101958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 
00:28:57.110 [2024-11-06 09:05:10.102079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.110 [2024-11-06 09:05:10.102119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.110 qpair failed and we were unable to recover it. 00:28:57.110 [2024-11-06 09:05:10.102281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.102320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.102478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.102519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.102646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.102695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.102826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.102862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.102985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.103024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.103155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.103194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.103363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.103400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.103529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.103569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.103732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.103773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.103972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.104012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.104138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.104187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.104384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.104425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.104612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.104652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.104768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.104808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.104963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.105003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.105197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.105228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.105386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.105431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.105577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.105617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.105755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.105795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.105962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.106135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.106293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.106498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.106748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.106895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.106926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.107113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.107152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.107320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.107361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.107491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.107533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.111 [2024-11-06 09:05:10.107689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.107727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 
00:28:57.111 [2024-11-06 09:05:10.107872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.111 [2024-11-06 09:05:10.107912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.111 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.108055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.108094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.108264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.108304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.108466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.108506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.108666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.108704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 
00:28:57.112 [2024-11-06 09:05:10.108876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.108916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.109052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.109093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.109225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.109264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.109421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.109468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.109637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.109677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 
00:28:57.112 [2024-11-06 09:05:10.109799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.109847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.109989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.110035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.110223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.110251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.110382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.110410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.110547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.110586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 
00:28:57.112 [2024-11-06 09:05:10.110776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.110814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.110965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.111003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.111130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.111170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.111286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.111324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 00:28:57.112 [2024-11-06 09:05:10.111482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.112 [2024-11-06 09:05:10.111521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.112 qpair failed and we were unable to recover it. 
00:28:57.112 [2024-11-06 09:05:10.111688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.111730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.111933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.111975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.112151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.112190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.112342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.112381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.112538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.112577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.112772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.112802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.112907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.112937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.113054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.113096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.113250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.113292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.113454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.113495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.113688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.113728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.113903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.113942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.114133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.114188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.114370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.114418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.114649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.112 [2024-11-06 09:05:10.114695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.112 qpair failed and we were unable to recover it.
00:28:57.112 [2024-11-06 09:05:10.114885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.114937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.115155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.115197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.115356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.115396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.115561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.115624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.115843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.115884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.116037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.116075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.116242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.116282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.116458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.116500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.116652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.116693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.116928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.116971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.117198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.117238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.117370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.117410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.117550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.117590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.117781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.117850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.118077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.118139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.118297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.118358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.118519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.118562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.118722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.118763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.118912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.118954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.119189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.119230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.119367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.119427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.119595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.119636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.119787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.119828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.120002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.120044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.120231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.120274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.120472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.120516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.120689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.120735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.120945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.120983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.121117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.121148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.121338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.121380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.121523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.121567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.121695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.121739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.121922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.121966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.122167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.113 [2024-11-06 09:05:10.122242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.113 qpair failed and we were unable to recover it.
00:28:57.113 [2024-11-06 09:05:10.122474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.122514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.122731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.122794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.123040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.123089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.123268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.123334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.123622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.123685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.123881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.123924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.124075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.124119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.124289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.124331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.124523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.124568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.124753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.124798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.124998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.125045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.125219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.125250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.125335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.125366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.125535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.125580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.125747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.125792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.125995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.126043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.126227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.126276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.126485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.126533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.126711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.126757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.126924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.126985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.127141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.127183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.127322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.127362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.127570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.127610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.127770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.127811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.128013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.128058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.128224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.128282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.128413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.128454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.128621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.128678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.128912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.128953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.129137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.129178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.129351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.129397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.129561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.129592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.129691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.129723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.129878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.129925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.114 qpair failed and we were unable to recover it.
00:28:57.114 [2024-11-06 09:05:10.130070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.114 [2024-11-06 09:05:10.130111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.130233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.130275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.130483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.130514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.130641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.130672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.130781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.130811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.130955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.130985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.131124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.131169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.131330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.131375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.131571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.131611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.131744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.131784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.131978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.132026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.132241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.115 [2024-11-06 09:05:10.132286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.115 qpair failed and we were unable to recover it.
00:28:57.115 [2024-11-06 09:05:10.132428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.132473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.132614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.132660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.132856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.132904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.133125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.133166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.133319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.133351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 
00:28:57.115 [2024-11-06 09:05:10.133459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.133490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.133646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.133677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.133903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.133950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.134130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.134176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.134314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.134366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 
00:28:57.115 [2024-11-06 09:05:10.134597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.134638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.134773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.134815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.135019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.135065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.135209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.135253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.135429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.135475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 
00:28:57.115 [2024-11-06 09:05:10.135633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.135680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.135828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.135884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.136072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.136119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.136330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.136371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.136555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.136618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 
00:28:57.115 [2024-11-06 09:05:10.136775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.136806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.136931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.115 [2024-11-06 09:05:10.136963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.115 qpair failed and we were unable to recover it. 00:28:57.115 [2024-11-06 09:05:10.137123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.137163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.137323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.137383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.137588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.137619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 
00:28:57.116 [2024-11-06 09:05:10.137751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.137781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.137954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.138172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.138421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.138576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 
00:28:57.116 [2024-11-06 09:05:10.138705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.138919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.138969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.139126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.139176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.139400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.139449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.139624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.139688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 
00:28:57.116 [2024-11-06 09:05:10.139863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.139912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.140091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.140140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.140323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.140372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.140531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.140581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.140732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.140781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 
00:28:57.116 [2024-11-06 09:05:10.140985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.141209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.141441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.141600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.141733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 
00:28:57.116 [2024-11-06 09:05:10.141859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.141892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.142017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.142049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.142207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.142248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.116 [2024-11-06 09:05:10.142450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.116 [2024-11-06 09:05:10.142498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.116 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.142703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.142751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.142956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.143223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.143347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.143486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.143630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.143850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.143882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.144010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.144042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.144201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.144241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.144423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.144463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.144689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.144719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.144867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.144899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.145027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.145058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.145269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.145309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.145475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.145516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.145660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.145701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.145907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.145956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.146163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.146210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.146361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.146419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.146643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.146674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.146809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.146846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.147054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.147105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.147296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.147345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.147534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.147587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.147742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.147789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.147964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.148013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 
00:28:57.117 [2024-11-06 09:05:10.148157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.148205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.148394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.117 [2024-11-06 09:05:10.148443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.117 qpair failed and we were unable to recover it. 00:28:57.117 [2024-11-06 09:05:10.148629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.148680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.148856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.148905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.149049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.149099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 
00:28:57.118 [2024-11-06 09:05:10.149314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.149355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.149522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.149586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.149819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.149888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.150015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.150073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 00:28:57.118 [2024-11-06 09:05:10.150267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.118 [2024-11-06 09:05:10.150308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.118 qpair failed and we were unable to recover it. 
00:28:57.118 [2024-11-06 09:05:10.150480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.150520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.150764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.150794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.150963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.150994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.151164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.151203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.151367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.151416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.151629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.151669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.151816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.151886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.152084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.152133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.152303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.152350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.152521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.152593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.152806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.152850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.153013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.153063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.153220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.153271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.153488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.153529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.153697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.153736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.153891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.153932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.154059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.154100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.154265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.154305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.154460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.154508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.154701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.154748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.154912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.154945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.155041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.155075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.155190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.155239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.155420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.155468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.155661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.155701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.118 qpair failed and we were unable to recover it.
00:28:57.118 [2024-11-06 09:05:10.155870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.118 [2024-11-06 09:05:10.155931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.156088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.156138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.156332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.156373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.156548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.156579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.156676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.156706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.156845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.156895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.157114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.157145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.157273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.157304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.157435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.157466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.157651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.157692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.157828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.157878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.158066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.158120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.158364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.158404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.158555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.158595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.158723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.158766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.158918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.158960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.159141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.159190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.159376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.159425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.159565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.159640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.159876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.159927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.160126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.160176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.160405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.160454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.160646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.160696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.160913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.160954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.161114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.161186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.161392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.161440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.161683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.161724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.161866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.161909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.162078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.162119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.162317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.162366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.162583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.162632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.162826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.162884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.163041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.163129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.163320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.163379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.119 [2024-11-06 09:05:10.163551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.119 [2024-11-06 09:05:10.163592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.119 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.163797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.163858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.164067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.164116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.164313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.164353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.164514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.164578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.164798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.164865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.165046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.165097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.165270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.165323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.165516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.165567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.165774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.165825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.166061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.166109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.166255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.166302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.166492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.166533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.166724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.166785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.166966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.167015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.167161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.167209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.167386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.167434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.167712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.167777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.168018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.168067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.168274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.168321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.168536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.168584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.168733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.168781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.168993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.169041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.169263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.169303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.169445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.169488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.169650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.169711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.120 qpair failed and we were unable to recover it.
00:28:57.120 [2024-11-06 09:05:10.169920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.120 [2024-11-06 09:05:10.169970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.170161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.170210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.170365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.170414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.170571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.170666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.170860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.170916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.171126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.171180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.171390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.171443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.171611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.171664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.171855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.171908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.172060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.172113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.172269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.172328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.172475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.172516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.172683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.172750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.172962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.173025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.173220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.173281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.173430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.173471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.173695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.173746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.173906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.173959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.174125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.174178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.174403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.174468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.174665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.174716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.174898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.174951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.175120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.175171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.175357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.121 [2024-11-06 09:05:10.175408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.121 qpair failed and we were unable to recover it.
00:28:57.121 [2024-11-06 09:05:10.175601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.175662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.175865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.175917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.176117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.176174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.176349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.176402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.176619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.176670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.176826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.122 [2024-11-06 09:05:10.176889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.122 qpair failed and we were unable to recover it.
00:28:57.122 [2024-11-06 09:05:10.177069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.177135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.177410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.177450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.177611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.177695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.177897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.177953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.178153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.178193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 
00:28:57.122 [2024-11-06 09:05:10.178324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.178364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.178566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.178617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.178823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.178888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.179054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.179105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.179308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.179361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 
00:28:57.122 [2024-11-06 09:05:10.179553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.179604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.179777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.179828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.122 qpair failed and we were unable to recover it. 00:28:57.122 [2024-11-06 09:05:10.180031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.122 [2024-11-06 09:05:10.180081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.180285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.180337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.180489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.180580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 
00:28:57.123 [2024-11-06 09:05:10.180761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.180812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.180983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.181035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.181228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.181292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.181437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.181707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.181758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 
00:28:57.123 [2024-11-06 09:05:10.181929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.181983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.182128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.182168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.182307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.182350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.182609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.182668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.182873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.182929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 
00:28:57.123 [2024-11-06 09:05:10.183173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.183224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.183433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.183486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.183701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.183753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.183986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.184038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.184286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.184350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 
00:28:57.123 [2024-11-06 09:05:10.184512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.184562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.184792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.184855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.185056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.185116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.185303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.185364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.123 [2024-11-06 09:05:10.185533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.185587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 
00:28:57.123 [2024-11-06 09:05:10.185745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.123 [2024-11-06 09:05:10.185800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.123 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.186035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.186087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.186291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.186343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.186499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.186556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.186711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.186762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 
00:28:57.124 [2024-11-06 09:05:10.186932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.186995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.187240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.187291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.187465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.187518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.187730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.187771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.187918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.187960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 
00:28:57.124 [2024-11-06 09:05:10.188131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.188190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.188373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.188426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.188594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.188655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.188859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.188913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.189075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.189127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 
00:28:57.124 [2024-11-06 09:05:10.189363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.189413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.189605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.189671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.189863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.189904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.190048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.190089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.190252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.190343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 
00:28:57.124 [2024-11-06 09:05:10.190572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.190630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.190857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.190910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.191092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.191145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.191310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.191370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 00:28:57.124 [2024-11-06 09:05:10.191541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.124 [2024-11-06 09:05:10.191593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.124 qpair failed and we were unable to recover it. 
00:28:57.124 [2024-11-06 09:05:10.191761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.191815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.192001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.192062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.192271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.192321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.192506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.192557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.192790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.192860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 
00:28:57.125 [2024-11-06 09:05:10.193065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.193116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.193307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.193358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.193522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.193573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.193781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.193848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.194032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.194086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 
00:28:57.125 [2024-11-06 09:05:10.194289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.194340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.194553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.194596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.194765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.194851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.195080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.195143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 00:28:57.125 [2024-11-06 09:05:10.195363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.125 [2024-11-06 09:05:10.195424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.125 qpair failed and we were unable to recover it. 
00:28:57.125 [2024-11-06 09:05:10.195673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.125 [2024-11-06 09:05:10.195733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.125 qpair failed and we were unable to recover it.
00:28:57.130 [2024-11-06 09:05:10.227788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.227873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.130 [2024-11-06 09:05:10.228103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.228161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.130 [2024-11-06 09:05:10.228432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.228490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.130 [2024-11-06 09:05:10.228761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.228820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.130 [2024-11-06 09:05:10.229032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.229090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 
00:28:57.130 [2024-11-06 09:05:10.229322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.229382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.130 [2024-11-06 09:05:10.229563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.130 [2024-11-06 09:05:10.229622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.130 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.229895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.229955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.230175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.230237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.230430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.230489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 
00:28:57.131 [2024-11-06 09:05:10.230701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.230760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.231007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.231067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.231335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.231394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.231659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.231718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.231948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.232011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 
00:28:57.131 [2024-11-06 09:05:10.232230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.232290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.232518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.232577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.232809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.232903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.233126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.233185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.233408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.233468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 
00:28:57.131 [2024-11-06 09:05:10.233696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.233755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.234003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.234066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.234292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.234351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.234552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.234613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.234877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.234946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 
00:28:57.131 [2024-11-06 09:05:10.235118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.235178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.235350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.235409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.235633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.235691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.235982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.236062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 00:28:57.131 [2024-11-06 09:05:10.236370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.131 [2024-11-06 09:05:10.236451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.131 qpair failed and we were unable to recover it. 
00:28:57.132 [2024-11-06 09:05:10.236663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.236721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.236925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.236985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.237188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.237250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.237509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.237569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.237803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.237875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 
00:28:57.132 [2024-11-06 09:05:10.238075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.238135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.238369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.238427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.238642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.238702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.238972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.239033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.239231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.239291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 
00:28:57.132 [2024-11-06 09:05:10.239524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.239583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.239764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.239826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.240097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.240157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.240368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.240428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.240685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.240743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 
00:28:57.132 [2024-11-06 09:05:10.241072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.241153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.241392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.241470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.241731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.241790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.242060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.242142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.242429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.242506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 
00:28:57.132 [2024-11-06 09:05:10.242711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.242773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.132 qpair failed and we were unable to recover it. 00:28:57.132 [2024-11-06 09:05:10.243048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.132 [2024-11-06 09:05:10.243126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.243343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.243422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.243647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.243707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.243972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.244051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 
00:28:57.133 [2024-11-06 09:05:10.244297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.244377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.244613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.244675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.244893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.244976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.245230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.245308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.245576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.245635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 
00:28:57.133 [2024-11-06 09:05:10.245913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.245992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.246292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.246368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.246596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.246654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.246881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.246942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.247199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.247268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 
00:28:57.133 [2024-11-06 09:05:10.247513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.247572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.247854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.247915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.248169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.248247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.248474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.248552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.248824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.248918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 
00:28:57.133 [2024-11-06 09:05:10.249164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.249224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.249482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.249558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.249800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.249879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.250124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.250202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 00:28:57.133 [2024-11-06 09:05:10.250446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.133 [2024-11-06 09:05:10.250523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.133 qpair failed and we were unable to recover it. 
00:28:57.133 [2024-11-06 09:05:10.250781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.134 [2024-11-06 09:05:10.250855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.134 qpair failed and we were unable to recover it. 00:28:57.134 [2024-11-06 09:05:10.251157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.134 [2024-11-06 09:05:10.251234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.134 qpair failed and we were unable to recover it. 00:28:57.134 [2024-11-06 09:05:10.251523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.134 [2024-11-06 09:05:10.251600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.134 qpair failed and we were unable to recover it. 00:28:57.134 [2024-11-06 09:05:10.251792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.134 [2024-11-06 09:05:10.251870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.134 qpair failed and we were unable to recover it. 00:28:57.134 [2024-11-06 09:05:10.252132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.134 [2024-11-06 09:05:10.252211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.134 qpair failed and we were unable to recover it. 
00:28:57.139 [2024-11-06 09:05:10.287930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.287991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.288190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.288248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.288520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.288590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.288729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.288763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.288937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.288971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 
00:28:57.139 [2024-11-06 09:05:10.289111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.289144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.289282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.289316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.289458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.289491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.289638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.289672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.289819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.289861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 
00:28:57.139 [2024-11-06 09:05:10.289978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.290160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.290372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.290548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.290761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 
00:28:57.139 [2024-11-06 09:05:10.290964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.290996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.291142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.291175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.291339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.291407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.291639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.291698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.139 [2024-11-06 09:05:10.291932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.291965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 
00:28:57.139 [2024-11-06 09:05:10.292109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.139 [2024-11-06 09:05:10.292141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.139 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.292295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.292345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.292547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.292607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.292797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.292893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.293014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 
00:28:57.140 [2024-11-06 09:05:10.293154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.293321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.293495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.293754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.293944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.293976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 
00:28:57.140 [2024-11-06 09:05:10.294086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.294252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.294388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.294541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.294714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 
00:28:57.140 [2024-11-06 09:05:10.294882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.294913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.295078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.295110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.295212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.295246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.295384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.295416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.295521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.295555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 
00:28:57.140 [2024-11-06 09:05:10.295700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.295734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.295984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.296018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.296156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.296188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.296322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.296354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.140 qpair failed and we were unable to recover it. 00:28:57.140 [2024-11-06 09:05:10.296486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.140 [2024-11-06 09:05:10.296519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-11-06 09:05:10.296656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.296690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.296813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.296853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.297023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.297057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.297198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.297249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.297386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.297419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-11-06 09:05:10.297574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.297607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.297776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.297809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.297995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.298027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.298143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.298176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.298344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.298377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-11-06 09:05:10.298634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.298668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.298908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.298942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.299044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.299192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.299372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-11-06 09:05:10.299510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.299647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.299867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.299900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.300002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.300034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.300193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.300228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 
00:28:57.141 [2024-11-06 09:05:10.300342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.300405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.300653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.300726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.141 [2024-11-06 09:05:10.300952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.141 [2024-11-06 09:05:10.300985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.141 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.301092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.301252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-11-06 09:05:10.301463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.301599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.301750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.301945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.301978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 00:28:57.142 [2024-11-06 09:05:10.302088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.142 [2024-11-06 09:05:10.302131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.142 qpair failed and we were unable to recover it. 
00:28:57.142 [2024-11-06 09:05:10.302336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.142 [2024-11-06 09:05:10.302369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.142 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f6ad0000b90 (addr=10.0.0.2, port=4420) repeats unchanged through 2024-11-06 09:05:10.322056 ...]
00:28:57.147 [2024-11-06 09:05:10.322167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.322200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.322322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.322355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.322470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.322503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.322610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.322643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.322815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.322872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-11-06 09:05:10.323014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.323191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.323333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.323479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.323658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-11-06 09:05:10.323796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.323949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.323983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.324120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.324276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.324447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-11-06 09:05:10.324590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.324759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.324952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.324988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.325136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.325169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.325272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.325305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 
00:28:57.147 [2024-11-06 09:05:10.325437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.325470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.325573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.325607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.147 qpair failed and we were unable to recover it. 00:28:57.147 [2024-11-06 09:05:10.325755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.147 [2024-11-06 09:05:10.325788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.325968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.326100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-11-06 09:05:10.326268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.326470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.326611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.326779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.326812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.326957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.327007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-11-06 09:05:10.327256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.327322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.327524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.327580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.327753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.327802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.328037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.328071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.328216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.328249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-11-06 09:05:10.328355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.328390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.328529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.328563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.328744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.328792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.328992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.329062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.329303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.329337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-11-06 09:05:10.329486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.329520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.329755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.329804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.330018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.330085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.330310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.330377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.330615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.330664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.148 [2024-11-06 09:05:10.330884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.330933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.331123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.331157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.331303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.331337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.331440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.331474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 00:28:57.148 [2024-11-06 09:05:10.331641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.148 [2024-11-06 09:05:10.331700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.148 qpair failed and we were unable to recover it. 
00:28:57.149 [2024-11-06 09:05:10.331856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.331898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.332103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.332153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.332356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.332405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.332632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.332688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.332887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.332923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 
00:28:57.149 [2024-11-06 09:05:10.333037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.333072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.333278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.333312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.333456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.333490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.333650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.333684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.333808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.333865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 
00:28:57.149 [2024-11-06 09:05:10.334059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.334107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.334292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.334352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.334531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.334564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.334677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.334711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.334928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.334999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 
00:28:57.149 [2024-11-06 09:05:10.335202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.335270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.335461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.335494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.335623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.335656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.335854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.335904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.336108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.336174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 
00:28:57.149 [2024-11-06 09:05:10.336325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.336372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.336573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.336631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.336817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.336876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.149 [2024-11-06 09:05:10.336987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.149 [2024-11-06 09:05:10.337020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.149 qpair failed and we were unable to recover it. 00:28:57.150 [2024-11-06 09:05:10.337123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.337157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 
00:28:57.150 [2024-11-06 09:05:10.337386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.337420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 00:28:57.150 [2024-11-06 09:05:10.337533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.337569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 00:28:57.150 [2024-11-06 09:05:10.337734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.337767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 00:28:57.150 [2024-11-06 09:05:10.338011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.338061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 00:28:57.150 [2024-11-06 09:05:10.338269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.150 [2024-11-06 09:05:10.338304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.150 qpair failed and we were unable to recover it. 
00:28:57.154 [2024-11-06 09:05:10.364395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.364462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.364615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.364672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.364821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.364894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.365088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.365136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.365319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.365386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 
00:28:57.154 [2024-11-06 09:05:10.365615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.365663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.365857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.365907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.366152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.366186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.154 [2024-11-06 09:05:10.366344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.154 [2024-11-06 09:05:10.366402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.154 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.366634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.366683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 
00:28:57.438 [2024-11-06 09:05:10.366942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.367010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.367229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.367296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.367464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.367527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.367694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.367738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.367891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.367948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 
00:28:57.438 [2024-11-06 09:05:10.368136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.368215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.438 qpair failed and we were unable to recover it. 00:28:57.438 [2024-11-06 09:05:10.368373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.438 [2024-11-06 09:05:10.368423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.368607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.368665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.368861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.368910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.369057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.369105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.369279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.369328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.369504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.369552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.369735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.369768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.369893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.369927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.370056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.370103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.370255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.370305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.370511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.370545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.370680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.370717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.370884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.370933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.371122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.371157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.371334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.371367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.371570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.371620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.371762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.371810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.372020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.372090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.372311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.372397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.372571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.372622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.372813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.372886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.372998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.373216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.373287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.373488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.373523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.373692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.373725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.373938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.373994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.374217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.374265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.374462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.374496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.374610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.374643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.374759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.374793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.374983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.375033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.375207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.375272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.375481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.375530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 00:28:57.439 [2024-11-06 09:05:10.375700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.439 [2024-11-06 09:05:10.375750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.439 qpair failed and we were unable to recover it. 
00:28:57.439 [2024-11-06 09:05:10.375999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.376052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.376222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.376257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.376427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.376486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.376665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.376713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.376901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.376951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-11-06 09:05:10.377102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.377150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.377324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.377371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.377518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.377565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.377786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.377842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.378089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.378124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-11-06 09:05:10.378256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.378290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.378497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.378544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.378699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.378749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.378933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.379007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.379254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.379319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-11-06 09:05:10.379517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.379566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.379797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.379873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.380091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.380158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.380415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.380482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.380677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.380727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-11-06 09:05:10.380988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.381056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.381316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.381366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.381526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.381576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.381766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.381814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 00:28:57.440 [2024-11-06 09:05:10.382047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.440 [2024-11-06 09:05:10.382097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.440 qpair failed and we were unable to recover it. 
00:28:57.440 [2024-11-06 09:05:10.382261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.440 [2024-11-06 09:05:10.382311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.440 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failed triple repeats 115 times between 09:05:10.382 and 09:05:10.411, always errno = 111 for tqpair=0x7f6ad0000b90, addr=10.0.0.2, port=4420 ...]
00:28:57.444 [2024-11-06 09:05:10.411200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.444 [2024-11-06 09:05:10.411274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.444 qpair failed and we were unable to recover it.
00:28:57.444 [2024-11-06 09:05:10.411443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.411492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.411667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.411715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.411876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.411927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.412157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.412205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.412386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.412439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 
00:28:57.444 [2024-11-06 09:05:10.412641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.412691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.412893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.412928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.413074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.413126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.413319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.413368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.414735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.414800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 
00:28:57.444 [2024-11-06 09:05:10.415036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 
00:28:57.444 [2024-11-06 09:05:10.415730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.415883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.415978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 
00:28:57.444 [2024-11-06 09:05:10.416367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.444 [2024-11-06 09:05:10.416822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.444 qpair failed and we were unable to recover it. 00:28:57.444 [2024-11-06 09:05:10.416957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.416984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.417095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.417666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.417915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.417941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.418246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.418773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.418931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.418972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.419636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.419904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.419931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.420371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.420888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.420916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 
00:28:57.445 [2024-11-06 09:05:10.421085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.421137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.421334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.421386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.421593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.445 [2024-11-06 09:05:10.421647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.445 qpair failed and we were unable to recover it. 00:28:57.445 [2024-11-06 09:05:10.421803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.421850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.421969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.421995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-11-06 09:05:10.422168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.422231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.422477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.422528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.422737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.422792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.422972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.422999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.423134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.423160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-11-06 09:05:10.423283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.423360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.423582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.423636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.423796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.423823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.423913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.423940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.424048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.424074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-11-06 09:05:10.424219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.424273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.424532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.424584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.424778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.424850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.425003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.425028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 00:28:57.446 [2024-11-06 09:05:10.425161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.446 [2024-11-06 09:05:10.425253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.446 qpair failed and we were unable to recover it. 
00:28:57.446 [2024-11-06 09:05:10.425485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.425538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.425806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.425870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.426006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.426031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.426141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.426173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.426348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.426398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.426554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.426614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.426811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.426892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.427002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.427028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.427183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.427248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.427441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.427483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.427750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.427802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.428027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.428053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.428268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.428323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.428633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.428685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.428861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.428906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.428997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.429022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.429122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.446 [2024-11-06 09:05:10.429176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.446 qpair failed and we were unable to recover it.
00:28:57.446 [2024-11-06 09:05:10.429383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.429434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.429722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.429776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.429994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.430021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.430106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.430155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.430364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.430415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.430724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.430777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.430957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.430997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.431100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.431169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.431326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.431392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.431603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.431676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.431884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.431911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.432025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.432052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.432165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.432191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.432410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.432464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.432713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.432765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.432963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.432990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.433103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.433168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.433327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.433388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.433591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.433641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.433822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.433890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.433985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.434090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.434245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.434490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.434745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.434868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.434895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.435010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.435044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.435169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.435219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.435424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.435475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.435630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.435685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.435882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.435928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.436047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.436073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.436163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.436228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.436423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.447 [2024-11-06 09:05:10.436474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.447 qpair failed and we were unable to recover it.
00:28:57.447 [2024-11-06 09:05:10.436669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.436715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.436851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.436877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.437018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.437044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.437224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.437272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.437457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.437505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.437718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.437782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.437990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.438881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.438976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.439160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.439345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.439537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.439723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.439896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.439936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.440075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.440132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.440305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.440373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.440575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.440626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.440853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.440900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.440987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.441013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.441142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.441174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.441352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.441403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.441616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.441665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.441827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.441896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.442006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.442032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.442146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.442172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.442329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.442377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.448 qpair failed and we were unable to recover it.
00:28:57.448 [2024-11-06 09:05:10.442565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.448 [2024-11-06 09:05:10.442612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.442781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.442806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.442920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.442950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.443065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.443260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.443487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.443689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.443829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.443974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.444899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.444925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.445898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.445925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.446900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.446928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.447069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.447118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.447346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.449 [2024-11-06 09:05:10.447394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.449 qpair failed and we were unable to recover it.
00:28:57.449 [2024-11-06 09:05:10.447550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.450 [2024-11-06 09:05:10.447611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.450 qpair failed and we were unable to recover it.
00:28:57.450 [2024-11-06 09:05:10.447819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.447854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.447944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.447971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.448140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.448199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.448366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.448422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.448569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.448622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.448731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.448757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.448841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.448867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.449567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.449957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.449983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.450068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.450094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.450183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.450208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.450341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.450391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.450578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.450629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.450780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.450829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.451048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.451239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.451451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.451592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.451727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.451860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.451888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.451977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 
00:28:57.450 [2024-11-06 09:05:10.452634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.452891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.450 [2024-11-06 09:05:10.452919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.450 qpair failed and we were unable to recover it. 00:28:57.450 [2024-11-06 09:05:10.453038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.453080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.453217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.453268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.453494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.453547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.453774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.453825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.454008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.454120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.454309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.454496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.454772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.454960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.454987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.455083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.455123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.455347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.455424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.455632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.455687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.455910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.455938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.456057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.456126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.456309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.456393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.456578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.456649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.456807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.456849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.456942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.456970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.457160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.457223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.457371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.457422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.457588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.457647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.457735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.457761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.457913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.457963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.458125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.458302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.458445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.458620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.458755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.458893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.458921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.459000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.459026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 00:28:57.451 [2024-11-06 09:05:10.459136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.451 [2024-11-06 09:05:10.459162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.451 qpair failed and we were unable to recover it. 
00:28:57.451 [2024-11-06 09:05:10.459273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.459299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.459418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.459444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.459538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.459564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.459666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.459706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.459855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.459882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.460002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.460662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.460820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.460964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.461010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.461184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.461228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.461390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.461438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.461659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.461687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.461800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.461826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.461986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.462188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.462345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.462538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.462698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.462813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.462952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.462978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.463092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.463236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.463347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.463446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.463565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.463726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 
00:28:57.452 [2024-11-06 09:05:10.463872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.463901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.464017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.452 [2024-11-06 09:05:10.464042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.452 qpair failed and we were unable to recover it. 00:28:57.452 [2024-11-06 09:05:10.464151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.464291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.464458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.464625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.464748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.464919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.464976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.465439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.465929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.465954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.466068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.466286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.466495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.466704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.466820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.466963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.466988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.467111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.467154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.467366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.467409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.467585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.467627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.467753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.467783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.467918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.467944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.468025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.468142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.468255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.468397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.468644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.468858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.468911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.469002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.469033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.469119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.469153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.469369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.469434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 
00:28:57.453 [2024-11-06 09:05:10.469587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.469631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.453 qpair failed and we were unable to recover it. 00:28:57.453 [2024-11-06 09:05:10.469796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.453 [2024-11-06 09:05:10.469848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.469983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.470012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.470149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.470224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.470421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.470464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.470601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.470669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.470855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.470882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.470990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.471208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.471411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.471627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.471824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.471960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.471984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.472099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.472124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.472259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.472301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.472442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.472498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.472690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.472743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.472910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.472935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.473045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.473190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.473348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.473534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.473752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.473929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.473955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.474033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.474059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.474178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.474209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.474381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.474430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.474578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.474620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.474855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.474900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.474992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.475017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 
00:28:57.454 [2024-11-06 09:05:10.475103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.454 [2024-11-06 09:05:10.475162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.454 qpair failed and we were unable to recover it. 00:28:57.454 [2024-11-06 09:05:10.475367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.475410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.475539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.475581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.475701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.475766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.475923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.475948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 
00:28:57.455 [2024-11-06 09:05:10.476067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.476092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.476238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.476281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.476469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.476511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.476712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.476774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.476950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.476975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 
00:28:57.455 [2024-11-06 09:05:10.477065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.477090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.477226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.477312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.477589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.477651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.477815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.477886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 00:28:57.455 [2024-11-06 09:05:10.478006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.455 [2024-11-06 09:05:10.478031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.455 qpair failed and we were unable to recover it. 
00:28:57.455 [2024-11-06 09:05:10.480874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.480914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.455 qpair failed and we were unable to recover it.
00:28:57.455 [2024-11-06 09:05:10.481014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.481043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.455 qpair failed and we were unable to recover it.
00:28:57.455 [2024-11-06 09:05:10.481133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.481161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.455 qpair failed and we were unable to recover it.
00:28:57.455 [2024-11-06 09:05:10.481282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.481334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.455 qpair failed and we were unable to recover it.
00:28:57.455 [2024-11-06 09:05:10.481508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.481535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:57.455 qpair failed and we were unable to recover it.
00:28:57.455 [2024-11-06 09:05:10.481631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.455 [2024-11-06 09:05:10.481670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.456 qpair failed and we were unable to recover it.
00:28:57.456 [2024-11-06 09:05:10.481764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.456 [2024-11-06 09:05:10.481791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.456 qpair failed and we were unable to recover it.
00:28:57.456 [2024-11-06 09:05:10.481913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.456 [2024-11-06 09:05:10.481953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.456 qpair failed and we were unable to recover it.
00:28:57.456 [2024-11-06 09:05:10.482105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.456 [2024-11-06 09:05:10.482164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.456 qpair failed and we were unable to recover it.
00:28:57.456 [2024-11-06 09:05:10.482296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.456 [2024-11-06 09:05:10.482340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.456 qpair failed and we were unable to recover it.
00:28:57.458 [2024-11-06 09:05:10.498064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.498132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.498354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.498394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.498584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.498632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.498862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.498903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.499039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.499079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 
00:28:57.458 [2024-11-06 09:05:10.499225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.499274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.499391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.499437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.499595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.499635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.499827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.499876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.500037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.500078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 
00:28:57.458 [2024-11-06 09:05:10.500239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.500279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.500409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.500449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.500611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.458 [2024-11-06 09:05:10.500652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.458 qpair failed and we were unable to recover it. 00:28:57.458 [2024-11-06 09:05:10.500818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.500879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.501024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.501063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.501235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.501275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.501441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.501486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.501622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.501662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.501780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.501820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.501983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.502152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.502396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.502558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.502755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.502957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.502997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.503163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.503203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.503339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.503379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.503504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.503545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.503695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.503736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.503869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.503910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.504042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.504082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.504273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.504313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.504478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.504518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.504686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.504747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.504885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.504927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.505069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.505109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.505270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.505309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.505470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.505510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.505664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.505705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.505866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.505906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 
00:28:57.459 [2024-11-06 09:05:10.506034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.506074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.459 qpair failed and we were unable to recover it. 00:28:57.459 [2024-11-06 09:05:10.506210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.459 [2024-11-06 09:05:10.506249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.506408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.506447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.506558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.506598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.506747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.506787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.506985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.507025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.507185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.507224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.507418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.507476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.507614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.507678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.507898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.507939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.508105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.508145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.508339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.508378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.508563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.508603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.508761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.508801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.508989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.509029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.509188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.509227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.509394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.509433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.509579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.509618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.509769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.509810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.509958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.510000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.510155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.510196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.510352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.510393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.510562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.510602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.510762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.510802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.510972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.511012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.511203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.511243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.511378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.511419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.511602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.511641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.511881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.511922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.512092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.512132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.512292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.512331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.512496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.512535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.512725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.512765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.512910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.512950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.460 [2024-11-06 09:05:10.513138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.513180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 
00:28:57.460 [2024-11-06 09:05:10.513307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.460 [2024-11-06 09:05:10.513350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.460 qpair failed and we were unable to recover it. 00:28:57.461 [2024-11-06 09:05:10.513543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.461 [2024-11-06 09:05:10.513585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.461 qpair failed and we were unable to recover it. 00:28:57.461 [2024-11-06 09:05:10.513760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.461 [2024-11-06 09:05:10.513802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.461 qpair failed and we were unable to recover it. 00:28:57.461 [2024-11-06 09:05:10.513945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.461 [2024-11-06 09:05:10.513988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.461 qpair failed and we were unable to recover it. 00:28:57.461 [2024-11-06 09:05:10.514147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.461 [2024-11-06 09:05:10.514188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.461 qpair failed and we were unable to recover it. 
00:28:57.464 [2024-11-06 09:05:10.539134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.539181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.539416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.539463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.539604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.539652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.539856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.539904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.540068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.540112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 
00:28:57.464 [2024-11-06 09:05:10.540255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.540299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.540471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.540521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.540666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.540710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.540868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.540913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.541088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.541135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 
00:28:57.464 [2024-11-06 09:05:10.541293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.541341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.541538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.541585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.541809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.541872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.542056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.542103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.542301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.542348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 
00:28:57.464 [2024-11-06 09:05:10.542492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.542541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.542763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.542810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.543011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.543059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.464 [2024-11-06 09:05:10.543256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.464 [2024-11-06 09:05:10.543302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.464 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.543479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.543526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.543723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.543770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.543963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.544011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.544170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.544217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.544444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.544492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.544636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.544683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.544864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.544912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.545061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.545109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.545294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.545341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.545484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.545532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.545699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.545746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.545959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.546006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.546169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.546217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.546368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.546415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.546563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.546620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.546822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.546880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.547069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.547117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.547297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.547343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.547533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.547580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.547738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.547786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.547986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.548035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.548258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.548305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.548467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.548513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.548651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.548698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.548881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.548931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.549154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.549202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.549382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.549431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.549581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.549628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.549802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.549862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.550059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.550107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.550291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.550339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 
00:28:57.465 [2024-11-06 09:05:10.550559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.550606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.550746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.550795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.465 [2024-11-06 09:05:10.551001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.465 [2024-11-06 09:05:10.551049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.465 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.551288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.551335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.551525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.551572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 
00:28:57.466 [2024-11-06 09:05:10.551756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.551805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.551994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.552042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.552241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.552298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.552519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.552567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.552796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.552886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 
00:28:57.466 [2024-11-06 09:05:10.553103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.553159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.553399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.553454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.553677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.553725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.553917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.553965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.554134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.554182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 
00:28:57.466 [2024-11-06 09:05:10.554404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.554452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.554601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.554648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.554811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.554876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.555074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.555126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.555361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.555411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 
00:28:57.466 [2024-11-06 09:05:10.555614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.555671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.555827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.555886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.556125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.556176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.556384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.556432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 00:28:57.466 [2024-11-06 09:05:10.556608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.466 [2024-11-06 09:05:10.556655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.466 qpair failed and we were unable to recover it. 
00:28:57.466 [2024-11-06 09:05:10.556875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.556924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.557118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.557165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.557364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.557411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.557591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.557638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.557803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.557865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.558064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.558112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.558302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.558350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.558517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.558564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.558748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.558796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.559060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.559136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.559378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.559429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.466 [2024-11-06 09:05:10.559581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.466 [2024-11-06 09:05:10.559648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.466 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.559849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.559904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.560099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.560159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.560353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.560403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.560596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.560650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.560880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.560932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.561117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.561168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.561365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.561415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.561613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.561664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.561881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.561932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.562133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.562183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.562359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.562409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.562643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.562693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.562884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.562935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.563089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.563139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.563345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.563403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.563564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.563615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.563823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.563888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.564089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.564141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.564372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.564423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.564602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.564652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.564893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.564946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.565146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.565197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.565391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.565441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.565671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.565729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.565954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.566006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.566166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.566217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.566420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.566470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.566702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.566761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.566997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.567049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.567204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.567253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.567396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.567446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.567666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.567989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.568066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.467 [2024-11-06 09:05:10.568376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.467 [2024-11-06 09:05:10.568451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.467 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.568754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.568813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.569080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.569158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.569433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.569494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.569783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.569857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.570131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.570203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.570498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.570575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.570886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.570938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.571095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.571174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.571366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.571421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.571584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.571637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.571881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.571936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.572143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.572198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.572429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.572479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.572630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.572681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.572926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.572979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.573174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.573225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.573364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.573414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.573620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.573671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.573884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.573937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.574099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.574148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.574338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.574388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.574597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.574647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.574827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.574890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.575071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.575122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.575325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.575376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.575587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.575637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.575856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.575908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.576123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.576175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.576419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.576469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.576656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.576706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.576902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.576955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.577117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.577169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.577352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.577402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.577623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.468 [2024-11-06 09:05:10.577677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.468 qpair failed and we were unable to recover it.
00:28:57.468 [2024-11-06 09:05:10.577877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.577941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.578163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.578217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.578470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.578523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.578740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.578793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.578981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.579035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.579266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.579319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.579474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.579528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.579741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.579795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.580045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.580098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.580265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.580320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.580568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.580622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.580877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.580933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.581150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.581203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.581418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.581472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.581716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.581770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.581995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.582049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.582298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.582352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.582599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.582653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.582863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.582917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.583135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.583189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.583405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.583459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.583662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.583715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.583863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.583917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.584145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.584198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.584405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.584458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.584688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.584746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.584991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.585046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.585258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.585321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.585529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.585583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.585818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.469 [2024-11-06 09:05:10.585918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.469 qpair failed and we were unable to recover it.
00:28:57.469 [2024-11-06 09:05:10.586137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.470 [2024-11-06 09:05:10.586191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.470 qpair failed and we were unable to recover it.
00:28:57.470 [2024-11-06 09:05:10.586399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.470 [2024-11-06 09:05:10.586452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.470 qpair failed and we were unable to recover it.
00:28:57.470 [2024-11-06 09:05:10.586661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.470 [2024-11-06 09:05:10.586714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.470 qpair failed and we were unable to recover it.
00:28:57.470 [2024-11-06 09:05:10.586943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.470 [2024-11-06 09:05:10.586999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.470 qpair failed and we were unable to recover it.
00:28:57.470 [2024-11-06 09:05:10.587259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.470 [2024-11-06 09:05:10.587313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.470 qpair failed and we were unable to recover it.
00:28:57.470 [2024-11-06 09:05:10.587490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.587544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.587697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.587751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.587991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.588046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.588221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.588276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.588483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.588537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 
00:28:57.470 [2024-11-06 09:05:10.588753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.588806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.589026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.589080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.589287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.589343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.589559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.589613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.589847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.589921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 
00:28:57.470 [2024-11-06 09:05:10.590172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.590225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.590451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.590508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.590711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.590769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.591027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.591086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.591307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.591365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 
00:28:57.470 [2024-11-06 09:05:10.591596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.591650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.591885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.591941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.592151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.592206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.592415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.592471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.592626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.592681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 
00:28:57.470 [2024-11-06 09:05:10.592885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.592940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.593188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.593242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.593429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.593483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.593724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.593778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.593949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.594004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 
00:28:57.470 [2024-11-06 09:05:10.594204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.594258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.594472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.594527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.594727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.594800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.595049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.595107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.470 qpair failed and we were unable to recover it. 00:28:57.470 [2024-11-06 09:05:10.595341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.470 [2024-11-06 09:05:10.595400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.595620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.595679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.595912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.595967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.596216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.596270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.596513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.596568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.596776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.596866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.597056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.597109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.597320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.597374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.597566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.597620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.597862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.597941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.598195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.598249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.598495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.598549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.598775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.598828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.599049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.599123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.599380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.599438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.599631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.599688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.599903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.599962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.600147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.600204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.600445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.600502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.600672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.600730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.600950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.601009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.601297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.601354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.601541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.601599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.601869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.601929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.602205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.602262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.602524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.602583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.602857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.602917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.603169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.603226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.603411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.603468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.603706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.603765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.604000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.604061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 
00:28:57.471 [2024-11-06 09:05:10.604267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.604336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.604563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.604622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.604860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.604920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.471 [2024-11-06 09:05:10.605172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.471 [2024-11-06 09:05:10.605240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.471 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.605412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.605473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-11-06 09:05:10.605644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.605702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.605944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.606004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.606208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.606266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.606479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.606536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.606819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.606890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-11-06 09:05:10.607088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.607146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.607408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.607465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.607649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.607707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.607973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.608033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-11-06 09:05:10.608313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.472 [2024-11-06 09:05:10.608372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-11-06 09:05:10.608618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.472 [2024-11-06 09:05:10.608675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.472 qpair failed and we were unable to recover it.
00:28:57.475 [2024-11-06 09:05:10.644665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.475 [2024-11-06 09:05:10.644723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.475 qpair failed and we were unable to recover it. 00:28:57.475 [2024-11-06 09:05:10.644983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.475 [2024-11-06 09:05:10.645042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.645250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.645327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.645522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.645580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.645816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.645886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.646124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.646208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.646415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.646493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.646679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.646738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.646959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.647018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.647221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.647280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.647520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.647578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.647861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.647921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.648143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.648201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.648464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.648522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.648788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.648863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.649175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.649251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.649545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.649621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.649935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.649997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.650242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.650317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.650548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.650616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.650907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.650985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.651227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.651303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.651548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.651624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.651859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.651937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.652130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.652207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.652428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.652504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.652734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.652793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.653050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.653128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.653365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.653441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.653665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.653726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.653993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.654054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.654341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.654418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.654675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.654732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.654968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.655046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.476 [2024-11-06 09:05:10.655336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.655412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 
00:28:57.476 [2024-11-06 09:05:10.655633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.476 [2024-11-06 09:05:10.655690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.476 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.655973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.656052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.656340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.656418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.659060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.659151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.659483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.659563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.659804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.659903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.660139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.660197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.660458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.660534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.660741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.660799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.661062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.661122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.661396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.661474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.661678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.661736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.661955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.662016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.662294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.662372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.662638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.662714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.663007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.663085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.663349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.663426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.663700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.663757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.663982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.664059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.664312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.664397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.664673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.664731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.664990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.665068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.665310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.665387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.665571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.665629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.665880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.665940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.666195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.666272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.666582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.666669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.666952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.667030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.667335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.667412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.667645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.667704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 
00:28:57.477 [2024-11-06 09:05:10.667961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.668041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.668240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.668316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.668519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.668578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.477 [2024-11-06 09:05:10.668755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.477 [2024-11-06 09:05:10.668816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.477 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.669096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.669173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 
00:28:57.478 [2024-11-06 09:05:10.669467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.669544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.669762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.669819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.670124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.670202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.670442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.670501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.670721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.670779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 
00:28:57.478 [2024-11-06 09:05:10.671098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.671176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.671382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.671458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.671681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.671738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.672043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.672121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 00:28:57.478 [2024-11-06 09:05:10.672362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.478 [2024-11-06 09:05:10.672421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.478 qpair failed and we were unable to recover it. 
00:28:57.758 [2024-11-06 09:05:10.707059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.758 [2024-11-06 09:05:10.707135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.758 qpair failed and we were unable to recover it. 00:28:57.758 [2024-11-06 09:05:10.707357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.758 [2024-11-06 09:05:10.707434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.758 qpair failed and we were unable to recover it. 00:28:57.758 [2024-11-06 09:05:10.707668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.758 [2024-11-06 09:05:10.707726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.758 qpair failed and we were unable to recover it. 00:28:57.758 [2024-11-06 09:05:10.707983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.758 [2024-11-06 09:05:10.708067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.708261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.708337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.708572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.708630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.708866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.708925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.709190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.709265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.709512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.709587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.709811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.709881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.710132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.710210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.710453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.710529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.710715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.710773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.711027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.711115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.711403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.711478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.711756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.711814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.712046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.712126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.712363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.712438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.712715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.712773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.713097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.713183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.713392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.713468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.713687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.713745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.713985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.714044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.714299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.714377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.714600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.714658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.714850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.714908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.715207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.715282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.715572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.715649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.715926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.716004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.716270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.716347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.716603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.716661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.716962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.717039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.717329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.717405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.717657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.717715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.718003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.718081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 
00:28:57.759 [2024-11-06 09:05:10.718379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.718455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.718678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.718736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.759 [2024-11-06 09:05:10.718967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.759 [2024-11-06 09:05:10.719043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.759 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.719288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.719363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.719539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.719596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.719815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.719889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.720153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.720230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.720521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.720596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.720826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.720915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.721202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.721279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.721563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.721639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.721912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.721981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.722189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.722265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.722561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.722638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.722867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.722927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.723117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.723192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.723429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.723505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.723769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.723827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.724065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.724141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.724435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.724511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.724785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.724872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.725141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.725199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.725456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.725533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.725732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.725790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.726002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.726081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.726382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.726460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.726748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.726806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.727114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.727191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.727464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.727541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.727800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.727874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.728174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.728250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.728458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.728534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.728698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.728756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.728989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.729069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.729357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.729435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 
00:28:57.760 [2024-11-06 09:05:10.729667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.729725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.760 [2024-11-06 09:05:10.729949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.760 [2024-11-06 09:05:10.730027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.760 qpair failed and we were unable to recover it. 00:28:57.761 [2024-11-06 09:05:10.730331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.761 [2024-11-06 09:05:10.730406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.761 qpair failed and we were unable to recover it. 00:28:57.761 [2024-11-06 09:05:10.730630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.761 [2024-11-06 09:05:10.730687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.761 qpair failed and we were unable to recover it. 00:28:57.761 [2024-11-06 09:05:10.730911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.761 [2024-11-06 09:05:10.730990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.761 qpair failed and we were unable to recover it. 
00:28:57.761 [2024-11-06 09:05:10.731247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.761 [2024-11-06 09:05:10.731322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.761 qpair failed and we were unable to recover it. 
00:28:57.761 [... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pairs for tqpair=0x1853fa0 (addr=10.0.0.2, port=4420) repeat through 09:05:10.768618; repeated lines elided ...] 
00:28:57.764 [2024-11-06 09:05:10.768929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.764 [2024-11-06 09:05:10.768989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.764 qpair failed and we were unable to recover it. 00:28:57.764 [2024-11-06 09:05:10.769249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.764 [2024-11-06 09:05:10.769307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.764 qpair failed and we were unable to recover it. 00:28:57.764 [2024-11-06 09:05:10.769554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.764 [2024-11-06 09:05:10.769631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.769891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.769970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.770215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.770291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.770482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.770541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.770827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.770896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.771157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.771233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.771515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.771574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.771795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.771864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.772123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.772199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.772452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.772511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.772675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.772733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.773021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.773099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.773335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.773413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.773626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.773686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.773931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.774009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.774209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.774286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.774500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.774558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.774827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.774906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.775207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.775283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.775580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.775656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.775923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.776001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.776242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.776317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.776587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.776662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.776947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.777024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.777238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.777314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.777537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.777594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.777827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.777901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.778130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.778188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.778437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.778511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.778690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.778748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.778964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.779024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.779316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.779392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 00:28:57.765 [2024-11-06 09:05:10.779574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.765 [2024-11-06 09:05:10.779632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.765 qpair failed and we were unable to recover it. 
00:28:57.765 [2024-11-06 09:05:10.779864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.779924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.780114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.780171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.780425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.780482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.780685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.780743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.781024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.781082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.781333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.781411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.781601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.781660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.781856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.781915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.782147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.782205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.782427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.782486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.782721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.782777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.783080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.783148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.783370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.783447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.783710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.783767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.784081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.784159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.784351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.784427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.784691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.784749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.785011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.785089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.785375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.785451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.785670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.785728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.785939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.786017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.786252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.786328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.786557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.786615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.786902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.786982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.787227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.787303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.787615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.787691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.787941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.788018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.788302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.788378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.788623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.788700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.788949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.789026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 
00:28:57.766 [2024-11-06 09:05:10.789302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.789377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.789619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.789677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.789949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.790026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.766 qpair failed and we were unable to recover it. 00:28:57.766 [2024-11-06 09:05:10.790245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.766 [2024-11-06 09:05:10.790321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 00:28:57.767 [2024-11-06 09:05:10.790561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.790638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 
00:28:57.767 [2024-11-06 09:05:10.790871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.790931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 00:28:57.767 [2024-11-06 09:05:10.791154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.791231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 00:28:57.767 [2024-11-06 09:05:10.791516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.791593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 00:28:57.767 [2024-11-06 09:05:10.791862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.791930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 00:28:57.767 [2024-11-06 09:05:10.792136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.767 [2024-11-06 09:05:10.792212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.767 qpair failed and we were unable to recover it. 
00:28:57.767 [2024-11-06 09:05:10.792441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:28:57.767 [2024-11-06 09:05:10.792517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 
00:28:57.767 qpair failed and we were unable to recover it. 
[... the identical failure triplet (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x1853fa0 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats continuously from 09:05:10.792737 through 09:05:10.829005 ...]
00:28:57.770 [2024-11-06 09:05:10.829232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.770 [2024-11-06 09:05:10.829308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.770 qpair failed and we were unable to recover it. 00:28:57.770 [2024-11-06 09:05:10.829531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.770 [2024-11-06 09:05:10.829589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.770 qpair failed and we were unable to recover it. 00:28:57.770 [2024-11-06 09:05:10.829811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.770 [2024-11-06 09:05:10.829888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.770 qpair failed and we were unable to recover it. 00:28:57.770 [2024-11-06 09:05:10.830137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.770 [2024-11-06 09:05:10.830212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.830447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.830523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.830786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.830864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.831156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.831233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.831533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.831609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.831889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.831968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.832221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.832296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.832520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.832597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.832821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.832894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.833083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.833160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.833387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.833464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.833692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.833751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.834026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.834104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.834396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.834472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.834704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.834763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.834989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.835066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.835282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.835359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.835585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.835644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.835930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.836007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.836314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.836390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.836658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.836715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.836966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.837043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.837283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.837358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.837582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.837641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.837844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.837903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.838144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.838219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.838491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.838549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.838771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.838829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.839098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.839175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.839413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.839487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.839712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.839770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.840043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.840120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 
00:28:57.771 [2024-11-06 09:05:10.840386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.840461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.771 [2024-11-06 09:05:10.840655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.771 [2024-11-06 09:05:10.840712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.771 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.840962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.841038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.841327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.841403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.841598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.841656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.841880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.841939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.842233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.842308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.842554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.842629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.842921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.842998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.843231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.843290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.843536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.843614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.843797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.843876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.844067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.844144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.844422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.844500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.844734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.844792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.845087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.845167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.845388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.845466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.845686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.845743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.845988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.846065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.846317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.846392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.846594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.846651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.846912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.846991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.847269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.847328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.847501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.847559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.847734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.847792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.848038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.848096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.848312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.848369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.848633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.848692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.848939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.849017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.849264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.849340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.849604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.849661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.849883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.849942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.850213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.850271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.850478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.850536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.850796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.850872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 
00:28:57.772 [2024-11-06 09:05:10.851173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.772 [2024-11-06 09:05:10.851251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.772 qpair failed and we were unable to recover it. 00:28:57.772 [2024-11-06 09:05:10.851540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.773 [2024-11-06 09:05:10.851616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.773 qpair failed and we were unable to recover it. 00:28:57.773 [2024-11-06 09:05:10.851814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.773 [2024-11-06 09:05:10.851889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.773 qpair failed and we were unable to recover it. 00:28:57.773 [2024-11-06 09:05:10.852117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.773 [2024-11-06 09:05:10.852202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.773 qpair failed and we were unable to recover it. 00:28:57.773 [2024-11-06 09:05:10.852488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.773 [2024-11-06 09:05:10.852565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.773 qpair failed and we were unable to recover it. 
00:28:57.773 [2024-11-06 09:05:10.852801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.773 [2024-11-06 09:05:10.852878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:57.773 qpair failed and we were unable to recover it.
00:28:57.773 [... the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 09:05:10.853165 through 09:05:10.888302; duplicate entries elided ...]
00:28:57.776 [2024-11-06 09:05:10.888535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.888610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.888861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.888921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.889218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.889292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.889375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1861f30 (9): Bad file descriptor 00:28:57.776 [2024-11-06 09:05:10.889805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.889931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.890256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.890323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 
00:28:57.776 [2024-11-06 09:05:10.890610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.890673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.890946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.891007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.891278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.891341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.891592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.891655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.776 qpair failed and we were unable to recover it. 00:28:57.776 [2024-11-06 09:05:10.891887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.776 [2024-11-06 09:05:10.891949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.892249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.892313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.892605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.892668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.892892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.892953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.893217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.893284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.893612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.893676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.893936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.893998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.894311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.894376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.894710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.894773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.895044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.895104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.895385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.895449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.895773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.895850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.896054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.896113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.896406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.896469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.896797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.896890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.897175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.897239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.897443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.897508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.897819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.897910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.898201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.898265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.898562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.898626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.898929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.898988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.899238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.899301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.899555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.899619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.899881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.899943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.900219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.900278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.900464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.900544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.900874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.900934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.901224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.901298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.901544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.901608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.901899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.901959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.777 [2024-11-06 09:05:10.902250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.902314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 
00:28:57.777 [2024-11-06 09:05:10.902528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.777 [2024-11-06 09:05:10.902593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.777 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.902828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.902901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.903119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.903198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.903492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.903554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.903894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.903954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.904258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.904317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.904570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.904633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.904934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.904994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.905290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.905354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.905594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.905656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.905898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.905960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.906254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.906318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.906516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.906579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.906907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.906967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.907207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.907271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.907496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.907559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.907896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.907957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.908244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.908307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.908592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.908655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.908978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.909043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.909297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.909362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.909609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.909672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.909968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.910034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.910318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.910385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.910686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.910748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.911028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.911095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.911346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.911410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.911648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.911711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.911906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.911971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.912207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.912273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 
00:28:57.778 [2024-11-06 09:05:10.912512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.912576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.912828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.912905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.913185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.913248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.913488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.778 [2024-11-06 09:05:10.913553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.778 qpair failed and we were unable to recover it. 00:28:57.778 [2024-11-06 09:05:10.913789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.913871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 
00:28:57.779 [2024-11-06 09:05:10.914120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.914184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 00:28:57.779 [2024-11-06 09:05:10.914434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.914509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 00:28:57.779 [2024-11-06 09:05:10.914752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.914816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 00:28:57.779 [2024-11-06 09:05:10.915087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.915150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 00:28:57.779 [2024-11-06 09:05:10.915420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.779 [2024-11-06 09:05:10.915484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.779 qpair failed and we were unable to recover it. 
00:28:57.779 [2024-11-06 09:05:10.915693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.915757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.916028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.916092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.916352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.916415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.916666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.916731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.917014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.917080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.917387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.917451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.917734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.917799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.918102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.918167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.918411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.918476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.918727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.918790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.919014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.919082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.919364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.919427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.919669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.919734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.920016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.920083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.920310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.920374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.920668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.920733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.921004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.921071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.921297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.921362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.921644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.921706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.921938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.922003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.922220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.922283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.922465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.922529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.922760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.922824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.923058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.923123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.923347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.923410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.923711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.923773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.924089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.924154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.779 qpair failed and we were unable to recover it.
00:28:57.779 [2024-11-06 09:05:10.924396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.779 [2024-11-06 09:05:10.924462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.924752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.924816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.925091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.925157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.925368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.925432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.925672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.925735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.925956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.926021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.926299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.926362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.926618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.926681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.926965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.927030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.927279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.927343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.927656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.927720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.927946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.928012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.928218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.928284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.928581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.928644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.928857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.928923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.929172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.929235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.929524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.929587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.929802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.929887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.930093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.930157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.930438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.930501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.930787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.930863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.931150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.931214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.931424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.931487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.931795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.931875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.932120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.932185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.932462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.932526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.932777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.932858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.933106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.933173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.933426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.780 [2024-11-06 09:05:10.933492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.780 qpair failed and we were unable to recover it.
00:28:57.780 [2024-11-06 09:05:10.933782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.933861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.934167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.934232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.934488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.934551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.934850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.934915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.935222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.935284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.935572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.935635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.935883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.935948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.936143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.936224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.936472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.936537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.936751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.936816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.937088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.937153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.937385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.937449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.937695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.937758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.937948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.938013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.938226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.938292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.938505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.938570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.938768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.938850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.939082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.939146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.939403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.939467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.939759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.939822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.940116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.940180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.940488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.940552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.940812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.940897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.941099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.941163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.941407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.941471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.941666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.941731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.941996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.942061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.942249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.942310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.942521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.942585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.942829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.942915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.943126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.943192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.943385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.943450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.943730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.781 [2024-11-06 09:05:10.943794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.781 qpair failed and we were unable to recover it.
00:28:57.781 [2024-11-06 09:05:10.944067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.944131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.944420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.944484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.944670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.944730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.944948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.945013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.945254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.945317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.945607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.945673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.945980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.946044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.946250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.946313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.946554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.946617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.946827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.946903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.947131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.947194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.947398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.947462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.947693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.947756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.948004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.948069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.948305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.948378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.948635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.948698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.948945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.949011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.949268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.949330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.949611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.949674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.949886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.949954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.950254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.950316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.950608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.950671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.950870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.950937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.951191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.951254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.951509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.951572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.951777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.782 [2024-11-06 09:05:10.951865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.782 qpair failed and we were unable to recover it.
00:28:57.782 [2024-11-06 09:05:10.952112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.952178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.952426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.952491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.952756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.952820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.953077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.953143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.953381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.953443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 
00:28:57.782 [2024-11-06 09:05:10.953662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.953725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.953979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.954044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.782 [2024-11-06 09:05:10.954239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.782 [2024-11-06 09:05:10.954302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.782 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.954496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.954558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.954854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.954919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.955168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.955234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.955479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.955545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.955787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.955886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.956140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.956206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.956459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.956524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.956776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.956857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.957144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.957207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.957455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.957521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.957721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.957785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.958042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.958104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.958342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.958406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.958645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.958708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.958962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.959025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.959262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.959326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.959576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.959640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.959939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.960003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.960228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.960291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.960576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.960641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.960863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.960951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.961211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.961275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.961555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.961618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.961810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.961891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.962170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.962234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.962436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.962500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.962782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.962863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.963120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.963183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.963432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.963498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.963751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.963814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.964082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.964146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.964435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.964498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 
00:28:57.783 [2024-11-06 09:05:10.964755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.964817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.783 [2024-11-06 09:05:10.965088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.783 [2024-11-06 09:05:10.965151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.783 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.965399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.965464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.965686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.965750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.966054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.966120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.966331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.966395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.966604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.966668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.966912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.966979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.967226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.967292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.967574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.967637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.967917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.967983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.968242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.968306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.968559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.968622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.968826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.968907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.969169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.969233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.969532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.969596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.969856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.969922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.970170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.970234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.970453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.970517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.970739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.970802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.971107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.971170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.971363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.971428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.971671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.971734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.971936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.972001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.972222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.972286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.972537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.972600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.972854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.972919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.973174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.973238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.973481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.973559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.973861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.973926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.974200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.974265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.974557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.974620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.974867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.974934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.975147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.975211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 00:28:57.784 [2024-11-06 09:05:10.975446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.784 [2024-11-06 09:05:10.975509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.784 qpair failed and we were unable to recover it. 
00:28:57.784 [2024-11-06 09:05:10.975759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:57.784 [2024-11-06 09:05:10.975823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:57.784 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats without variation from 09:05:10.975759 through 09:05:11.012182 ...]
00:28:57.788 [2024-11-06 09:05:11.012400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.012464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.012738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.012802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.013048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.013112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.013318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.013380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.013668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.013732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 
00:28:57.788 [2024-11-06 09:05:11.013984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.014051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.014300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.014364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.014600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.014664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.015021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.015088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.015336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.015400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 
00:28:57.788 [2024-11-06 09:05:11.015647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.015711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.015893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.015959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.016213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.016276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.016580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.016643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.788 qpair failed and we were unable to recover it. 00:28:57.788 [2024-11-06 09:05:11.016859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.788 [2024-11-06 09:05:11.016925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.017174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.017237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.017481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.017544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.017863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.017928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.018182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.018245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.018486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.018551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.018775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.018853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.019088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.019151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.019393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.019457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.019762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.019826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.020132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.020196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.020411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.020477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.020721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.020784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.020985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.021050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.021296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.021362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.021571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.021634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.021926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.021990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.022271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.022333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.022613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.022676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.022971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.023036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.023330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.023394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.023645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.023709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.023996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.024062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.024260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.024325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.024608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.024683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.024961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.025028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.025247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.025311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.025521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.025584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.025789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.025881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.026138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.026201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.026395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.026461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 
00:28:57.789 [2024-11-06 09:05:11.026675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.026741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.026962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.027026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.789 [2024-11-06 09:05:11.027312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.789 [2024-11-06 09:05:11.027378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.789 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.027674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.027738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.027962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.028028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 
00:28:57.790 [2024-11-06 09:05:11.028277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.028342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.028584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.028649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.028909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.028974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.029216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.029279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.029523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.029589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 
00:28:57.790 [2024-11-06 09:05:11.029787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.029898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.030169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.030234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.030448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.030511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.030738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.030802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.031114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.031179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 
00:28:57.790 [2024-11-06 09:05:11.031405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.031467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:57.790 [2024-11-06 09:05:11.031711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:57.790 [2024-11-06 09:05:11.031774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:57.790 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.031979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.032046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.032262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.032326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.032552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.032616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 
00:28:58.066 [2024-11-06 09:05:11.032864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.032933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.033153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.033219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.033463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.033528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.033768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.033852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.034077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.034142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 
00:28:58.066 [2024-11-06 09:05:11.034334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.034398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.034611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.034673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.034900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.034966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.066 [2024-11-06 09:05:11.035216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.066 [2024-11-06 09:05:11.035281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.066 qpair failed and we were unable to recover it. 00:28:58.067 [2024-11-06 09:05:11.035505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.067 [2024-11-06 09:05:11.035569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.067 qpair failed and we were unable to recover it. 
00:28:58.067 [2024-11-06 09:05:11.035788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.067 [2024-11-06 09:05:11.035870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.067 qpair failed and we were unable to recover it.
[identical three-line record repeated for every retry from 09:05:11.036109 through 09:05:11.071806 (log timestamps 00:28:58.067-00:28:58.070): each attempt fails connect() with errno = 111 against tqpair=0x7f6ad8000b90, addr=10.0.0.2, port=4420, and the qpair is never recovered]
00:28:58.070 [2024-11-06 09:05:11.072057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.072120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.072409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.072473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.072672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.072736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.072954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.073019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.073274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.073339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 
00:28:58.070 [2024-11-06 09:05:11.073595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.073659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.073863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.073930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.074154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.074218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.074504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.074577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.074771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.074852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 
00:28:58.070 [2024-11-06 09:05:11.075098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.075164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.075408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.075471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.075669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.075736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.075957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.076025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 00:28:58.070 [2024-11-06 09:05:11.076245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.070 [2024-11-06 09:05:11.076308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.070 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.076558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.076621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.076859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.076925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.077178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.077242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.077476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.077540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.077756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.077820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.078067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.078130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.078389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.078453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.078669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.078734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.079013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.079079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.079292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.079359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.079594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.079658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.079944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.080009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.080249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.080315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.080514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.080577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.080821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.080899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.081142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.081210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.081467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.081531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.081759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.081824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.082102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.082167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.082399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.082461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.082762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.082828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.083090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.083156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.083344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.083407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.083653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.083717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.084007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.084072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 
00:28:58.071 [2024-11-06 09:05:11.084293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.084355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.084559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.084622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.084823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.084906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.085136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.071 [2024-11-06 09:05:11.085199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.071 qpair failed and we were unable to recover it. 00:28:58.071 [2024-11-06 09:05:11.085482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.085545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.085801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.085902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.086153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.086220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.086461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.086524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.086777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.086862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.087121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.087185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.087433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.087495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.087743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.087808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.088027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.088094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.088359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.088424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.088666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.088730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.088975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.089040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.089330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.089394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.089652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.089717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.089986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.090050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.090315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.090378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.090576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.090641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.090896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.090963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.091218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.091282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.091531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.091596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.091853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.091917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.092174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.092238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.092515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.092578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.092824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.092902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.093166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.093229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.093430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.093494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 
00:28:58.072 [2024-11-06 09:05:11.093744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.093808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.094109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.094173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.094383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.094447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.072 [2024-11-06 09:05:11.094733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.072 [2024-11-06 09:05:11.094797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.072 qpair failed and we were unable to recover it. 00:28:58.073 [2024-11-06 09:05:11.095012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-11-06 09:05:11.095079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.073 qpair failed and we were unable to recover it. 
00:28:58.073 [2024-11-06 09:05:11.095346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.073 [2024-11-06 09:05:11.095419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.073 qpair failed and we were unable to recover it.
00:28:58.076 [... previous message repeated for every subsequent reconnect attempt to 10.0.0.2:4420 through 2024-11-06 09:05:11.130919 ...]
00:28:58.076 [2024-11-06 09:05:11.131106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.131169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.131392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.131457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.131700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.131766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.132074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.132138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.132393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.132458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 
00:28:58.076 [2024-11-06 09:05:11.132739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.132812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.133149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.133213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.133460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.133523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.133778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.133857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.134091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.134154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 
00:28:58.076 [2024-11-06 09:05:11.134432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.134494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.134707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.134770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.135038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.135102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.135389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.135451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.135644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.135706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 
00:28:58.076 [2024-11-06 09:05:11.135965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.076 [2024-11-06 09:05:11.136032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.076 qpair failed and we were unable to recover it. 00:28:58.076 [2024-11-06 09:05:11.136320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.136382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.136618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.136680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.136923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.137293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.137355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.137594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.137657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.137950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.138015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.138275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.138338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.138634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.138697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.138927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.138992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.139234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.139298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.139548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.139613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.139865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.139931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.140193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.140259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.140466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.140532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.140814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.140908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.141158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.141221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.141509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.141573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.141780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.141864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.142115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.142180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.142395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.142458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.142690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.142753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.143062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.143126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.143405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.143468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.143677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.143740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.144044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.144107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.144333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.144395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.144644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.144707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.144925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.144989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.145268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.145331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 
00:28:58.077 [2024-11-06 09:05:11.145572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.077 [2024-11-06 09:05:11.145645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.077 qpair failed and we were unable to recover it. 00:28:58.077 [2024-11-06 09:05:11.145851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.145918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.146206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.146270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.146482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.146547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.146759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.146822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.147077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.147141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.147376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.147439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.147620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.147683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.147892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.147957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.148179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.148241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.148524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.148587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.148819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.148898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.149140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.149204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.149480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.149544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.149767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.149845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.150137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.150200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.150449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.150512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.150793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.150872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.151077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.151142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.151404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.151467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.151744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.151808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.152100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.152164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.152391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.152454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.152712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.152774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.153046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.153110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.153385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.153449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.153696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.153760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.154044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.154108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.154381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.154444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 00:28:58.078 [2024-11-06 09:05:11.154691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.078 [2024-11-06 09:05:11.154757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.078 qpair failed and we were unable to recover it. 
00:28:58.078 [2024-11-06 09:05:11.155001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.078 [2024-11-06 09:05:11.155066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.078 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1055 connect() errno = 111, then nvme_tcp.c:2288 sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats ~115 more times, timestamps 09:05:11.155–09:05:11.190, log time 00:28:58.078–00:28:58.082 ...]
00:28:58.082 [2024-11-06 09:05:11.190787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.190885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.191148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.191211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.191504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.191569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.191786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.191867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.192098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.192161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 
00:28:58.082 [2024-11-06 09:05:11.192378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.192442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.192681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.192744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.193050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.193114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.193321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.193384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.193631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.193694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 
00:28:58.082 [2024-11-06 09:05:11.193990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.194055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.194300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.194363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.194605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.194672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.194926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.194991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.195276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.195340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 
00:28:58.082 [2024-11-06 09:05:11.195638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.195711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.195966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.196030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.196200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.082 [2024-11-06 09:05:11.196264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.082 qpair failed and we were unable to recover it. 00:28:58.082 [2024-11-06 09:05:11.196510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.196572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.196772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.196856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.197115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.197179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.197466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.197528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.197806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.197893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.198138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.198202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.198437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.198499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.198699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.198764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.199046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.199113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.199379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.199442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.199685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.199747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.199997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.200062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.200301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.200363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.200656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.200718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.200964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.201029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.201278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.201342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.201604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.201666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.201946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.202010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.202308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.202372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.202617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.202681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.202923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.202988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.203238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.203303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.203549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.203613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.203815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.203893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.204170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.204234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.204482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.204545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.204792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.204868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.205103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.205166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.205404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.205467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.205703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.205769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.206031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.206094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.206348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.206411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 
00:28:58.083 [2024-11-06 09:05:11.206599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.206664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.206923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.083 [2024-11-06 09:05:11.206987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.083 qpair failed and we were unable to recover it. 00:28:58.083 [2024-11-06 09:05:11.207272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.207335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.207556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.207618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.207884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.207949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.084 [2024-11-06 09:05:11.208203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.208278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.208563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.208627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.208872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.208936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.209198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.209261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.209521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.209584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.084 [2024-11-06 09:05:11.209861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.209926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.210173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.210236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.210472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.210536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.210769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.210845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.211047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.211111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.084 [2024-11-06 09:05:11.211393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.211457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.211654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.211716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.211994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.212059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.212342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.212406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.212702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.212765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.084 [2024-11-06 09:05:11.212998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.213062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.213351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.213415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.213651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.213713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.213965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.214029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.214269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.214331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.084 [2024-11-06 09:05:11.214580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.214642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.214900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.214966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.215170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.215234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.215430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.215493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 00:28:58.084 [2024-11-06 09:05:11.215748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.084 [2024-11-06 09:05:11.215811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.084 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.250537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.250600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.250860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.250925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.251114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.251177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.251451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.251513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.251722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.251784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.252059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.252426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.252489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.252730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.252792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.253058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.253122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.253398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.253462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.253743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.253804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.254086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.254150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.254437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.254510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.254760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.254824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.255132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.255195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.255436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.255502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.255802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.255884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.256141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.256208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.256467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.256532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.256774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.256853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.257140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.257203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.257428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.257492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.257748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.257810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.258097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.258162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.258443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.258507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.088 [2024-11-06 09:05:11.258749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.258813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.259063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.259127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.259417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.259480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.259731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.259795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 00:28:58.088 [2024-11-06 09:05:11.260100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.088 [2024-11-06 09:05:11.260164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.088 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.260387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.260453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.260700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.260764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.261084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.261149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.261443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.261506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.261789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.261871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.262136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.262200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.262488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.262551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.262789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.262872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.263114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.263177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.263451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.263514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.263757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.263819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.264106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.264169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.264425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.264488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.264740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.264802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.265010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.265074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.265293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.265356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.265593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.265656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.265933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.265998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.266281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.266344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.266575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.266638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.266887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.266950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.267188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.267252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.267494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.267567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.267823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.267902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.268102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.268165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.268411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.268475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.268720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.268785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.269052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.269117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.269361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.269425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.269717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.269779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 
00:28:58.089 [2024-11-06 09:05:11.270053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.270118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.270380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.270444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.270748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.270811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.089 [2024-11-06 09:05:11.271078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.089 [2024-11-06 09:05:11.271143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.089 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.271431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.271495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 
00:28:58.090 [2024-11-06 09:05:11.271780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.271859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.272070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.272134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.272421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.272484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.272748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.272811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.273072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.273136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 
00:28:58.090 [2024-11-06 09:05:11.273373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.273437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.273623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.273686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.273946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.274011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.274234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.274297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 00:28:58.090 [2024-11-06 09:05:11.274586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.090 [2024-11-06 09:05:11.274650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.090 qpair failed and we were unable to recover it. 
00:28:58.090 [2024-11-06 09:05:11.274866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.090 [2024-11-06 09:05:11.274931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.090 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111, ECONNREFUSED) and unrecoverable qpair error for tqpair=0x7f6ad8000b90 (addr=10.0.0.2, port=4420) repeated 114 more times between 09:05:11.275148 and 09:05:11.311337 ...]
00:28:58.093 [2024-11-06 09:05:11.311589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.311653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.093 [2024-11-06 09:05:11.311890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.311955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.093 [2024-11-06 09:05:11.312231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.312293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.093 [2024-11-06 09:05:11.312545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.312608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.093 [2024-11-06 09:05:11.312858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.312923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 
00:28:58.093 [2024-11-06 09:05:11.313130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.313192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.093 [2024-11-06 09:05:11.313394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.093 [2024-11-06 09:05:11.313456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.093 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.313708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.313773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.314014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.314079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.314347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.314410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.314612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.314675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.314932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.314996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.315290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.315353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.315581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.315645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.315852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.315919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.316156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.316220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.316416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.316481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.316719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.316783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.317035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.317099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.317347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.317410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.317642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.317707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.317921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.317987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.318233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.318307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.318553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.318618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.318822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.318899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.319197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.319259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.319546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.319609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.319859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.319924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.320178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.320240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.320440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.320507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.320787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.320863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.321163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.321226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.321513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.321576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.321819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.321909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.322150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.322212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.322496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.322558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.322815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.322894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.323179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.323242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.323529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.323591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.323876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.323941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 
00:28:58.094 [2024-11-06 09:05:11.324150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.094 [2024-11-06 09:05:11.324213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.094 qpair failed and we were unable to recover it. 00:28:58.094 [2024-11-06 09:05:11.324462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.324526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.324775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.324852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.325054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.325118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.325396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.325458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.325743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.325806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.326128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.326193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.326475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.326536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.326824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.326904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.327209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.327273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.327559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.327622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.327920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.327985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.328277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.328339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.328621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.328684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.328897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.328961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.329198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.329260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.329510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.329575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.329795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.329875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.330164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.330227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.330523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.330585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.330858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.330922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.331207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.331269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.331519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.331592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.331870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.331935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.332147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.332210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.332490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.332553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.332846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.332913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.333197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.333263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.333461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.333527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.333746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.333809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.334062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.334133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.334352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.334431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.334736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.334802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.335032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.335095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 00:28:58.095 [2024-11-06 09:05:11.335382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.095 [2024-11-06 09:05:11.335447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.095 qpair failed and we were unable to recover it. 
00:28:58.095 [2024-11-06 09:05:11.335656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.096 [2024-11-06 09:05:11.335721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.096 qpair failed and we were unable to recover it. 00:28:58.096 [2024-11-06 09:05:11.335960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.096 [2024-11-06 09:05:11.336027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.096 qpair failed and we were unable to recover it. 00:28:58.096 [2024-11-06 09:05:11.336252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.096 [2024-11-06 09:05:11.336318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.096 qpair failed and we were unable to recover it. 00:28:58.096 [2024-11-06 09:05:11.336540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.096 [2024-11-06 09:05:11.336603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.096 qpair failed and we were unable to recover it. 00:28:58.096 [2024-11-06 09:05:11.336900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.096 [2024-11-06 09:05:11.336965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.096 qpair failed and we were unable to recover it. 
[identical "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triples repeat ~110 more times, timestamps advancing from 09:05:11.337259 through 09:05:11.371985; duplicates elided]
00:28:58.393 [2024-11-06 09:05:11.372180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.372243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 00:28:58.393 [2024-11-06 09:05:11.372442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.372507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 00:28:58.393 [2024-11-06 09:05:11.372751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.372815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 00:28:58.393 [2024-11-06 09:05:11.373027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.373089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 00:28:58.393 [2024-11-06 09:05:11.373326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.373388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 
00:28:58.393 [2024-11-06 09:05:11.373685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.393 [2024-11-06 09:05:11.373748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.393 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.374059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.374123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.374382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.374443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.374683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.374745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.375010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.375074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.375358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.375420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.375663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.375725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.376022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.376086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.376330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.376392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.376611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.376674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.376962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.377026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.377265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.377327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.377574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.377636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.377886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.377950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.378285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.378640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.378706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.378960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.379027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.379205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.379268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.379510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.379575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.379822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.379903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.380151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.380215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.380466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.380532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.380820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.380897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.381134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.381198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.381437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.381500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.381777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.381854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.382138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.382202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.382439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.382504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.382740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.382803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.383029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.383094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.383351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.383414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.383691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.383753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.384018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.384083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.384370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.384434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 00:28:58.394 [2024-11-06 09:05:11.384664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.394 [2024-11-06 09:05:11.384726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.394 qpair failed and we were unable to recover it. 
00:28:58.394 [2024-11-06 09:05:11.384979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.385043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.385274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.385337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.385555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.385619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.385903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.385967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.386256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.386319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.386573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.386635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.386925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.387001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.387265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.387327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.387571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.387634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.387916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.387981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.388198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.388260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.388502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.388564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.388800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.388876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.389105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.389168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.389374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.389439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.389728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.389792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.390024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.390087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.390350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.390412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.390704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.390768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.391073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.391137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.391395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.391458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.391694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.391756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.391993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.392056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.392361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.392423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.392664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.392726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.392982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.393049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.393310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.393373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.393621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.393683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.393920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.393985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 00:28:58.395 [2024-11-06 09:05:11.394224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.394287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.395 qpair failed and we were unable to recover it. 
00:28:58.395 [2024-11-06 09:05:11.394578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.395 [2024-11-06 09:05:11.394639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.394918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.394983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.395267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.395331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.395624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.395697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.395994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.396059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 
00:28:58.396 [2024-11-06 09:05:11.396320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.396383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.396617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.396679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.396972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.397036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.397264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.397327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.397585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.397647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 
00:28:58.396 [2024-11-06 09:05:11.397858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.397933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.398141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.398205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.398422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.398485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.398710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.398775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 00:28:58.396 [2024-11-06 09:05:11.399010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.396 [2024-11-06 09:05:11.399075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.396 qpair failed and we were unable to recover it. 
00:28:58.396 [2024-11-06 09:05:11.399328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.399393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.399653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.399718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.399983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.400047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.400251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.400314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.400546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.400611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.400857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.400922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.401214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.401277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 939076 Killed "${NVMF_APP[@]}" "$@"
00:28:58.396 [2024-11-06 09:05:11.401576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.401641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.401901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.401966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:58.396 [2024-11-06 09:05:11.402223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.402286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:58.396 [2024-11-06 09:05:11.402498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.402562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:28:58.396 [2024-11-06 09:05:11.402853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.402917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:58.396 [2024-11-06 09:05:11.403180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.403244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.396 [2024-11-06 09:05:11.403528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.403590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.403799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.403875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.404060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.396 [2024-11-06 09:05:11.404124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.396 qpair failed and we were unable to recover it.
00:28:58.396 [2024-11-06 09:05:11.404406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.404469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.404717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.404781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.404992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.405026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.405146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.405178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.405282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.405315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.405452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.405485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.405698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.405760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.406008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.406079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.406261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.406325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.406517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.406574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.406711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.406750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.406945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.406979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.407145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.407178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.407283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.407316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.407422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.407455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.407596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.407628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.407783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.407820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.408016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.408050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.408237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.408273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.408403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.408467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.408754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.408829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.409026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.409059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.409241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.409303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=939517
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:58.397 [2024-11-06 09:05:11.409586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.409654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 939517
00:28:58.397 [2024-11-06 09:05:11.409935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.409968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 939517 ']'
00:28:58.397 [2024-11-06 09:05:11.410081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.410148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:58.397 [2024-11-06 09:05:11.410452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.410543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:58.397 [2024-11-06 09:05:11.410869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:58.397 [2024-11-06 09:05:11.410940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:58.397 [2024-11-06 09:05:11.411080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.411146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.397 [2024-11-06 09:05:11.411312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.411357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.397 [2024-11-06 09:05:11.411626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.397 [2024-11-06 09:05:11.411693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.397 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.411969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.412854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.412973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.413930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.413955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.414953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.414978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.415847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.415980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.416005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.416094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.416142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.416251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.416284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.416413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.416448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.416571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.398 [2024-11-06 09:05:11.416605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.398 qpair failed and we were unable to recover it.
00:28:58.398 [2024-11-06 09:05:11.416738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.416771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.416899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.416925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.399 [2024-11-06 09:05:11.417644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.399 qpair failed and we were unable to recover it.
00:28:58.399 [2024-11-06 09:05:11.417735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.417765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.417852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.417880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.417963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.417987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 
00:28:58.399 [2024-11-06 09:05:11.418315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.418736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 
00:28:58.399 [2024-11-06 09:05:11.418957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.418985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 
00:28:58.399 [2024-11-06 09:05:11.419600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.419825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.419880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 
00:28:58.399 [2024-11-06 09:05:11.420258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 
00:28:58.399 [2024-11-06 09:05:11.420858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.399 qpair failed and we were unable to recover it. 00:28:58.399 [2024-11-06 09:05:11.420962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.399 [2024-11-06 09:05:11.420986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.421428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.421919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.421945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.422040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.422580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.422890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.422976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.423202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.423840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.423954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.423981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.424464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 00:28:58.400 [2024-11-06 09:05:11.424941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.400 [2024-11-06 09:05:11.424968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.400 qpair failed and we were unable to recover it. 
00:28:58.400 [2024-11-06 09:05:11.425042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.401 [2024-11-06 09:05:11.425651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.425897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.425923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.401 [2024-11-06 09:05:11.426256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.401 [2024-11-06 09:05:11.426818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.426963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.426989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.401 [2024-11-06 09:05:11.427443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.427895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.427984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.401 [2024-11-06 09:05:11.428083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.428219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.428327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.428465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 00:28:58.401 [2024-11-06 09:05:11.428605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.401 [2024-11-06 09:05:11.428630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.401 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.445022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.445047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.445133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.445158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.445246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.445270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.445355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.445384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.445491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.445516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.446231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.446372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.446512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.446651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.446765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.446896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.446923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.447528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.447951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.447976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.448066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.448091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.448207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.448231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.448351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.448377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.448504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.448528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.448867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.448896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.448989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.449015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 
00:28:58.405 [2024-11-06 09:05:11.449098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.449135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.449506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.449535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.449681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.449708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.449801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.405 [2024-11-06 09:05:11.449827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.405 qpair failed and we were unable to recover it. 00:28:58.405 [2024-11-06 09:05:11.449927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.449952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.450042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.450713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.450937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.450963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.451332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.451878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.451904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.451982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.452580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.452922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.452947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.453033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.453148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.453280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.453421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.453572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 00:28:58.406 [2024-11-06 09:05:11.453684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.406 [2024-11-06 09:05:11.453708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.406 qpair failed and we were unable to recover it. 
00:28:58.406 [2024-11-06 09:05:11.453787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.453813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.453909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.453934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-11-06 09:05:11.454448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.454966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.454991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-11-06 09:05:11.455080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.455105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.455210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.455236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.455373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.455398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.455513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.455538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 00:28:58.407 [2024-11-06 09:05:11.455620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.407 [2024-11-06 09:05:11.455657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.407 qpair failed and we were unable to recover it. 
00:28:58.407 [2024-11-06 09:05:11.455735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.455760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.455881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.455907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.455985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.456906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.456932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.457957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.457982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.407 [2024-11-06 09:05:11.458058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.407 [2024-11-06 09:05:11.458083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.407 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.458911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.458937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.459928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.459953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.460943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.460969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.461957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.461984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.462144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.462283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.462504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.462613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.408 [2024-11-06 09:05:11.462749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.408 qpair failed and we were unable to recover it.
00:28:58.408 [2024-11-06 09:05:11.462850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.462876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.462965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.462990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.463915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.463940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464444] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization...
00:28:58.409 [2024-11-06 09:05:11.464493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 [2024-11-06 09:05:11.464523] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.464898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.464922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.465881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.465981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.466014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.466119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.466166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.466591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.466625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.466760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.466788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.466924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.466953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.467040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.467067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.467208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.467242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.467365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.409 [2024-11-06 09:05:11.467413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.409 qpair failed and we were unable to recover it.
00:28:58.409 [2024-11-06 09:05:11.467586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.467647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.467772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.467800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.467925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.467969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.468897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.468926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.469880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.469974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.470929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.470957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.471059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.471098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.471203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.471229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.471312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.410 [2024-11-06 09:05:11.471339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.410 qpair failed and we were unable to recover it.
00:28:58.410 [2024-11-06 09:05:11.471430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.471455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.471535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.471561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.471647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.471675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.471771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.471797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.471913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.471954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 
00:28:58.410 [2024-11-06 09:05:11.472051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.472080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.472178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.472227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.410 qpair failed and we were unable to recover it. 00:28:58.410 [2024-11-06 09:05:11.472347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.410 [2024-11-06 09:05:11.472392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.472511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.472559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.472689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.472714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.472796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.472823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.472921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.472948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.473426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.473959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.473985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.474064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.474677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.474943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.474969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.475387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.475903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.475986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.476012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 
00:28:58.411 [2024-11-06 09:05:11.476100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.476126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.476268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.476334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.411 [2024-11-06 09:05:11.476457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.411 [2024-11-06 09:05:11.476486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.411 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.476596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.476621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.476716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.476742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.476845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.476872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.476949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.476975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.477470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.477910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.477994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.478104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.478683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.478929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.478956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.479282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.479845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.479957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.479984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 
00:28:58.412 [2024-11-06 09:05:11.480581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.412 [2024-11-06 09:05:11.480764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.412 qpair failed and we were unable to recover it. 00:28:58.412 [2024-11-06 09:05:11.480867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-11-06 09:05:11.480896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-11-06 09:05:11.480992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-11-06 09:05:11.481019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 00:28:58.413 [2024-11-06 09:05:11.481104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.413 [2024-11-06 09:05:11.481130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.413 qpair failed and we were unable to recover it. 
00:28:58.413 [2024-11-06 09:05:11.481270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.481910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.481997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.482964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.482989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.483875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.483977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.484939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.484964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.413 qpair failed and we were unable to recover it.
00:28:58.413 [2024-11-06 09:05:11.485052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.413 [2024-11-06 09:05:11.485077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.485894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.485999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.486890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.486918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.487894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.487989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.488932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.488959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.489044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.489069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.489175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.489200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.489305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.414 [2024-11-06 09:05:11.489339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.414 qpair failed and we were unable to recover it.
00:28:58.414 [2024-11-06 09:05:11.489435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.489460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.489588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.489615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.489702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.489728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.489823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.489871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.489967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.489995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.490952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.490978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.491944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.491969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.492926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.492952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.415 [2024-11-06 09:05:11.493656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.415 [2024-11-06 09:05:11.493683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.415 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.493767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.493793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.493894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.493921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.494904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.494992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.495018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.495102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.495135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.495241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.495276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.495372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.495398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.495494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.416 [2024-11-06 09:05:11.495519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.416 qpair failed and we were unable to recover it.
00:28:58.416 [2024-11-06 09:05:11.495607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.495633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.495709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.495733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.495819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.495853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.495944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.495970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-11-06 09:05:11.496177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-11-06 09:05:11.496861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.496900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.496999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.497028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.497148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.497174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.497264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.497295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 00:28:58.416 [2024-11-06 09:05:11.497384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.497410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.416 qpair failed and we were unable to recover it. 
00:28:58.416 [2024-11-06 09:05:11.497495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.416 [2024-11-06 09:05:11.497524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.497599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.497625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.497728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.497767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.497874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.497902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.497992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.498097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.498678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.498928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.498954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.499306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.499852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.499957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.499984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.500537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.500903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.500930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.501020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.501046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 
00:28:58.417 [2024-11-06 09:05:11.501148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.501174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.501258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.501285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.501385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.501412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.501501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.417 [2024-11-06 09:05:11.501539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.417 qpair failed and we were unable to recover it. 00:28:58.417 [2024-11-06 09:05:11.501624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.501651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.501732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.501757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.501871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.501897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.501984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.502361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.502801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.502912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.502938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.503516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.503900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.503926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.504020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.504137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.504244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.504358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.504500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 00:28:58.418 [2024-11-06 09:05:11.504631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.418 [2024-11-06 09:05:11.504670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.418 qpair failed and we were unable to recover it. 
00:28:58.418 [2024-11-06 09:05:11.504764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.418 [2024-11-06 09:05:11.504791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.418 qpair failed and we were unable to recover it.
00:28:58.418-00:28:58.422 [previous three-line error sequence repeated ~114 more times between 09:05:11.504 and 09:05:11.519, same errno = 111 and addr=10.0.0.2, port=4420, cycling through tqpair values 0x1853fa0, 0x7f6ad0000b90, 0x7f6ad8000b90, and 0x7f6acc000b90]
00:28:58.422 [2024-11-06 09:05:11.519795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.519820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.519913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.519939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 
00:28:58.422 [2024-11-06 09:05:11.520451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.520911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.520939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 
00:28:58.422 [2024-11-06 09:05:11.521032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.521186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.521319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.521469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.521577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 
00:28:58.422 [2024-11-06 09:05:11.521750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.521896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.521924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 
00:28:58.422 [2024-11-06 09:05:11.522370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.522866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.522897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 
00:28:58.422 [2024-11-06 09:05:11.523003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.523028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.422 qpair failed and we were unable to recover it. 00:28:58.422 [2024-11-06 09:05:11.523111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.422 [2024-11-06 09:05:11.523137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.523248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.523354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.523464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.523597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.523729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.523881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.523910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.524243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.524786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.524934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.524960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.525590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.525927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.525952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.526195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.526764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.526900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.526928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.527066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.527105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.527197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.527223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.423 [2024-11-06 09:05:11.527330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.527355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 
00:28:58.423 [2024-11-06 09:05:11.527434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.423 [2024-11-06 09:05:11.527458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.423 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.527566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.527591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.527675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.527701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.527844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.527870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.527949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.527975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 
00:28:58.424 [2024-11-06 09:05:11.528073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.528199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.528360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.528471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.528605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 
00:28:58.424 [2024-11-06 09:05:11.528721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.528870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.528898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 
00:28:58.424 [2024-11-06 09:05:11.529448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 00:28:58.424 [2024-11-06 09:05:11.529930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.424 [2024-11-06 09:05:11.529958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.424 qpair failed and we were unable to recover it. 
00:28:58.427 [2024-11-06 09:05:11.544000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.427 [2024-11-06 09:05:11.544026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.427 qpair failed and we were unable to recover it. 00:28:58.427 [2024-11-06 09:05:11.544104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.427 [2024-11-06 09:05:11.544129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.427 qpair failed and we were unable to recover it. 00:28:58.427 [2024-11-06 09:05:11.544235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.427 [2024-11-06 09:05:11.544260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.427 qpair failed and we were unable to recover it. 00:28:58.427 [2024-11-06 09:05:11.544367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.427 [2024-11-06 09:05:11.544394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.427 qpair failed and we were unable to recover it. 00:28:58.427 [2024-11-06 09:05:11.544507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.544532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.544672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.544698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.544778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.544804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.544927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.544956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.545304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.545807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.545956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.545982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.428 [2024-11-06 09:05:11.546499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.546590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.546890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.546998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.547178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.547280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.547420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.547529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.547642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.547753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.547896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.547922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 
00:28:58.428 [2024-11-06 09:05:11.548514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.428 [2024-11-06 09:05:11.548792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.428 qpair failed and we were unable to recover it. 00:28:58.428 [2024-11-06 09:05:11.548936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.548963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.549076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.549215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.549351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.549484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.549625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.549778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.549943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.549970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.550609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.550881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.550999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.551107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.551242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.551353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.551504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.551629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.551756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.551898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.551925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.552554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.552900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.552926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.553012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.553037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 
00:28:58.429 [2024-11-06 09:05:11.553160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.553186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.553266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.429 [2024-11-06 09:05:11.553291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.429 qpair failed and we were unable to recover it. 00:28:58.429 [2024-11-06 09:05:11.553400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.430 [2024-11-06 09:05:11.553427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.430 qpair failed and we were unable to recover it. 00:28:58.430 [2024-11-06 09:05:11.553536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.430 [2024-11-06 09:05:11.553563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.430 qpair failed and we were unable to recover it. 00:28:58.430 [2024-11-06 09:05:11.553658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.430 [2024-11-06 09:05:11.553685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.430 qpair failed and we were unable to recover it. 
00:28:58.430 [2024-11-06 09:05:11.553786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.430 [2024-11-06 09:05:11.553844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.430 qpair failed and we were unable to recover it.
00:28:58.430 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats from 09:05:11.553935 through 09:05:11.568484 for tqpair values 0x7f6acc000b90, 0x7f6ad0000b90, 0x7f6ad8000b90, and 0x1853fa0, all with addr=10.0.0.2, port=4420 ...]
00:28:58.433 [2024-11-06 09:05:11.568589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.568615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.568704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.568732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.568865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.568905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 
00:28:58.433 [2024-11-06 09:05:11.569353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.569867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.569906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 
00:28:58.433 [2024-11-06 09:05:11.570023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 
00:28:58.433 [2024-11-06 09:05:11.570611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.570918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.570946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.571035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.571062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.571151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.571181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 
00:28:58.433 [2024-11-06 09:05:11.571301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.571326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.433 [2024-11-06 09:05:11.571446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.433 [2024-11-06 09:05:11.571474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.433 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.571603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.571632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.571714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.571744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.571827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.571859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.571942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.571969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.572547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.572921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.572947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.573060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.573226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.573335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.573451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.573589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.573712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.573866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.573906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.574574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.574898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.574937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.575283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.575806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.575933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.575960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.576042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.576068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.576179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.576204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.576310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.576336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 00:28:58.434 [2024-11-06 09:05:11.576420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.576447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.434 qpair failed and we were unable to recover it. 
00:28:58.434 [2024-11-06 09:05:11.576554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.434 [2024-11-06 09:05:11.576579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.576665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.576693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.576771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.576796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.576922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.576949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 
00:28:58.435 [2024-11-06 09:05:11.577149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 
00:28:58.435 [2024-11-06 09:05:11.577882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.577909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.577992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.578018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.578127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.578153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.578269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.578296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 00:28:58.435 [2024-11-06 09:05:11.578389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.435 [2024-11-06 09:05:11.578414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.435 qpair failed and we were unable to recover it. 
00:28:58.435 [2024-11-06 09:05:11.578491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.578517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.578592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.578619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.578730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.578759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.578879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.578908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.579867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.579898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.580864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.580980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.581007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.581131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.581157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.581240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.581269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.435 qpair failed and we were unable to recover it.
00:28:58.435 [2024-11-06 09:05:11.581360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.435 [2024-11-06 09:05:11.581385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.581463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.581488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.581594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.581619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.581705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.581731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.581811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.581848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.581936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.581961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.582908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.582934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.583869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.583978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.584898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.584925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.585891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.585919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.436 [2024-11-06 09:05:11.586033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.436 [2024-11-06 09:05:11.586059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.436 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.586870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.586981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.587893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.587985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.588910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.588937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.589884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.589996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.590974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.590998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.591080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.591105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.591202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.591227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.591321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.591361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.437 [2024-11-06 09:05:11.591451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.437 [2024-11-06 09:05:11.591480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.437 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.591577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.591603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.591687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.591713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.591848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.591875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.591965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.591993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.592900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.592986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.593012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.593123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.438 [2024-11-06 09:05:11.593150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.438 qpair failed and we were unable to recover it.
00:28:58.438 [2024-11-06 09:05:11.593278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.593450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.593552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.593649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.593784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 
00:28:58.438 [2024-11-06 09:05:11.593930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.593957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 
00:28:58.438 [2024-11-06 09:05:11.594580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.594971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.594998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 
00:28:58.438 [2024-11-06 09:05:11.595225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 
00:28:58.438 [2024-11-06 09:05:11.595875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.595901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.595984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 
00:28:58.438 [2024-11-06 09:05:11.596481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.596907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.438 [2024-11-06 09:05:11.596936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.438 qpair failed and we were unable to recover it. 00:28:58.438 [2024-11-06 09:05:11.597049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.597166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.597310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.597415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.597522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.597629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.597769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.597928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.597968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.598461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.598888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.598914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.599119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.599715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.599962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.599988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.600342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.600822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.600933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.600958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.601531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.601883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.601984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.602012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.602127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.602154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 
00:28:58.439 [2024-11-06 09:05:11.602268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.602294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.602435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.439 [2024-11-06 09:05:11.602461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.439 qpair failed and we were unable to recover it. 00:28:58.439 [2024-11-06 09:05:11.602549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.602576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.602676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.602703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.602788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.602814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 
00:28:58.440 [2024-11-06 09:05:11.602932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.602958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 
00:28:58.440 [2024-11-06 09:05:11.603565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.603881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.603995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.604021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 00:28:58.440 [2024-11-06 09:05:11.604102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.440 [2024-11-06 09:05:11.604129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.440 qpair failed and we were unable to recover it. 
00:28:58.440 [2024-11-06 09:05:11.604220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.604962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.604991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.605950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.440 [2024-11-06 09:05:11.605976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.440 qpair failed and we were unable to recover it.
00:28:58.440 [2024-11-06 09:05:11.606060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.606910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.606939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:58.441 [2024-11-06 09:05:11.607839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:58.441 [2024-11-06 09:05:11.607857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:58.441 [2024-11-06 09:05:11.607857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:58.441 [2024-11-06 09:05:11.607883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:58.441 [2024-11-06 09:05:11.607882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.607960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.607985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.608901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.608983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:58.441 [2024-11-06 09:05:11.609521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:58.441 [2024-11-06 09:05:11.609599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.441 [2024-11-06 09:05:11.609569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:58.441 [2024-11-06 09:05:11.609572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:58.441 [2024-11-06 09:05:11.609736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.441 [2024-11-06 09:05:11.609761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.441 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.609867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.609892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.609980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.610921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.610950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.611968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.611994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.612887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.612914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.442 [2024-11-06 09:05:11.613704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.442 [2024-11-06 09:05:11.613733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.442 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.613816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.613855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.613949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.613975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.614895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.614981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.615008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.615100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.615126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.615243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.615269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.615374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.443 [2024-11-06 09:05:11.615400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.443 qpair failed and we were unable to recover it.
00:28:58.443 [2024-11-06 09:05:11.615512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.615539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.615623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.615649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.615774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.615812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.615932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.615959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 
00:28:58.443 [2024-11-06 09:05:11.616189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 
00:28:58.443 [2024-11-06 09:05:11.616799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.616931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.616959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 
00:28:58.443 [2024-11-06 09:05:11.617378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 00:28:58.443 [2024-11-06 09:05:11.617936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.443 [2024-11-06 09:05:11.617962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.443 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.618074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.618689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.618951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.618978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.619326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.619804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.619931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.619958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.620498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.620910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.620937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.621150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 
00:28:58.444 [2024-11-06 09:05:11.621824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.621957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.621983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.622055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.622080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.622174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.444 [2024-11-06 09:05:11.622200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.444 qpair failed and we were unable to recover it. 00:28:58.444 [2024-11-06 09:05:11.622293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.622412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.622516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.622646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.622776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.622916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.622956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.623052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.623649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.623887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.623916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.624277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.624880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.624907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.624985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.625126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.625232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.625344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 
00:28:58.445 [2024-11-06 09:05:11.625448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.625627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.445 [2024-11-06 09:05:11.625754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.445 [2024-11-06 09:05:11.625780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.445 qpair failed and we were unable to recover it. 00:28:58.446 [2024-11-06 09:05:11.625884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.446 [2024-11-06 09:05:11.625912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.446 qpair failed and we were unable to recover it. 00:28:58.446 [2024-11-06 09:05:11.626005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.446 [2024-11-06 09:05:11.626031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.446 qpair failed and we were unable to recover it. 
00:28:58.446 [2024-11-06 09:05:11.626116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.446 [2024-11-06 09:05:11.626146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.446 qpair failed and we were unable to recover it. 
00:28:58.449 [message repeated: the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock connection error / "qpair failed and we were unable to recover it." sequence recurs continuously from 09:05:11.626 through 09:05:11.639 for tqpairs 0x7f6acc000b90, 0x1853fa0, 0x7f6ad8000b90, and 0x7f6ad0000b90, all targeting addr=10.0.0.2, port=4420] 
00:28:58.449 [2024-11-06 09:05:11.639681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.639721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.639817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.639852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.639936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.639963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 
00:28:58.449 [2024-11-06 09:05:11.640300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.640791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 
00:28:58.449 [2024-11-06 09:05:11.640913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.640940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 
00:28:58.449 [2024-11-06 09:05:11.641526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.449 [2024-11-06 09:05:11.641793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.449 qpair failed and we were unable to recover it. 00:28:58.449 [2024-11-06 09:05:11.641897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.641925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 
00:28:58.450 [2024-11-06 09:05:11.642125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 
00:28:58.450 [2024-11-06 09:05:11.642709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.642951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.642977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 
00:28:58.450 [2024-11-06 09:05:11.643290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.643801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 
00:28:58.450 [2024-11-06 09:05:11.643916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.643942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 
00:28:58.450 [2024-11-06 09:05:11.644531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.450 [2024-11-06 09:05:11.644895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.450 [2024-11-06 09:05:11.644923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.450 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.645128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.645707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.645869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.645978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.646344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.646807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.646963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.646990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.647540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.647907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.647996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.648022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 
00:28:58.736 [2024-11-06 09:05:11.648109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.736 [2024-11-06 09:05:11.648142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.736 qpair failed and we were unable to recover it. 00:28:58.736 [2024-11-06 09:05:11.648227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.737 [2024-11-06 09:05:11.648254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.737 qpair failed and we were unable to recover it. 00:28:58.737 [2024-11-06 09:05:11.648339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.737 [2024-11-06 09:05:11.648367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.737 qpair failed and we were unable to recover it. 00:28:58.737 [2024-11-06 09:05:11.648451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.737 [2024-11-06 09:05:11.648477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.737 qpair failed and we were unable to recover it. 00:28:58.737 [2024-11-06 09:05:11.648573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.737 [2024-11-06 09:05:11.648601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.737 qpair failed and we were unable to recover it. 
00:28:58.737 [2024-11-06 09:05:11.648712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.648738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.648861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.648894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.648982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.649889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.649979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.650902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.650928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.651877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.651996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.652022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.652104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.737 [2024-11-06 09:05:11.652130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.737 qpair failed and we were unable to recover it.
00:28:58.737 [2024-11-06 09:05:11.652214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.652907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.652948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.653901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.653928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.654966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.654992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.655963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.655989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.656083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.656108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.738 [2024-11-06 09:05:11.656190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.738 [2024-11-06 09:05:11.656216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.738 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.656329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.656442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.656631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.656759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.656905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.656988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.657900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.657981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.658924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.658952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.659902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.659930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.660026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.660051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.660137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.660171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.739 [2024-11-06 09:05:11.660253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.739 [2024-11-06 09:05:11.660279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.739 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.660935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.660962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.661941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.661967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.740 [2024-11-06 09:05:11.662676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.740 qpair failed and we were unable to recover it.
00:28:58.740 [2024-11-06 09:05:11.662794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.662822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.662920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.662946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 
00:28:58.740 [2024-11-06 09:05:11.663402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.663856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 
00:28:58.740 [2024-11-06 09:05:11.663963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.663989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.664102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.664138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.664232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.740 [2024-11-06 09:05:11.664258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.740 qpair failed and we were unable to recover it. 00:28:58.740 [2024-11-06 09:05:11.664370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.664478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.664594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.664732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.664854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.664964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.664990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.665200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.665805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.665931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.665958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.666407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.666876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.666916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.667001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.667137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.667254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.667365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.667507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 
00:28:58.741 [2024-11-06 09:05:11.667618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.741 [2024-11-06 09:05:11.667645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.741 qpair failed and we were unable to recover it. 00:28:58.741 [2024-11-06 09:05:11.667771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.667801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.667892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.667919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.668275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.668763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.668909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.668938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.669465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.669939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.669967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.670058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.670660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.670929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.670956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.742 [2024-11-06 09:05:11.671276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 00:28:58.742 [2024-11-06 09:05:11.671724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.742 [2024-11-06 09:05:11.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.742 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.671862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.671896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.672462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.672920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.672951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.673032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.673656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.673921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.673950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.674283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.674761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.674893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.674921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 
00:28:58.743 [2024-11-06 09:05:11.675509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.743 qpair failed and we were unable to recover it. 00:28:58.743 [2024-11-06 09:05:11.675760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.743 [2024-11-06 09:05:11.675787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.675897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.675924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.676123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.676717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.676893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.676977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.677328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.677843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.677957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.677984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.678555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.678903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.678929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.679114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 
00:28:58.744 [2024-11-06 09:05:11.679708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.744 [2024-11-06 09:05:11.679734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.744 qpair failed and we were unable to recover it. 00:28:58.744 [2024-11-06 09:05:11.679816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.679855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.679941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.679971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.680279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.680753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.680881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.680909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.681479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.681896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.681922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.682109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.682719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.682966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.682992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 
00:28:58.745 [2024-11-06 09:05:11.683330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.745 qpair failed and we were unable to recover it. 00:28:58.745 [2024-11-06 09:05:11.683810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.745 [2024-11-06 09:05:11.683841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 
00:28:58.746 [2024-11-06 09:05:11.683925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.683951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 
00:28:58.746 [2024-11-06 09:05:11.684510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.684969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.684997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 
00:28:58.746 [2024-11-06 09:05:11.685103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 
00:28:58.746 [2024-11-06 09:05:11.685688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.685931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.685956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.686036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.686062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 00:28:58.746 [2024-11-06 09:05:11.686151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.746 [2024-11-06 09:05:11.686178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.746 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.686258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.686366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.686477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.686580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.686683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.686794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.686920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.686947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.687365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.687826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.687954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.687979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.688472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.688939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.688965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.689055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 
00:28:58.747 [2024-11-06 09:05:11.689679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.747 [2024-11-06 09:05:11.689810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.747 qpair failed and we were unable to recover it. 00:28:58.747 [2024-11-06 09:05:11.689915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.689942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.690248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.690781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.690898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.690928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.691327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.691826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.691960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.691987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.692510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.692883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.692910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.693000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.693108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.693222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.693330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.693456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 00:28:58.748 [2024-11-06 09:05:11.693586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.748 [2024-11-06 09:05:11.693628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.748 qpair failed and we were unable to recover it. 
00:28:58.748 [2024-11-06 09:05:11.693715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.693743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.693859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.693886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.693965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.693991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.694338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.694889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.694916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.694997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.695430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.695890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.695916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.695993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.696545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.696920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.696945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.697020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.697045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 
00:28:58.749 [2024-11-06 09:05:11.697132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.697159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.697246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.697274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.749 [2024-11-06 09:05:11.697355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.749 [2024-11-06 09:05:11.697382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.749 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.697470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.697498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.697582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.697607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.697693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.697718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.697799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.697825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.697924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.697953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.698302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.698887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.698916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.698993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.699468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.699955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.699980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.700070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.700636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.700973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.700999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.750 [2024-11-06 09:05:11.701077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.701103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 
00:28:58.750 [2024-11-06 09:05:11.701183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.750 [2024-11-06 09:05:11.701210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.750 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.701761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.701918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.701999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.702313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.702787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.702913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.702939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.703519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.703901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.703928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.704123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 
00:28:58.751 [2024-11-06 09:05:11.704692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.704927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.704954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.705045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.751 [2024-11-06 09:05:11.705076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.751 qpair failed and we were unable to recover it. 00:28:58.751 [2024-11-06 09:05:11.705201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 
00:28:58.752 [2024-11-06 09:05:11.705313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 00:28:58.752 [2024-11-06 09:05:11.705431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 00:28:58.752 [2024-11-06 09:05:11.705572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 00:28:58.752 [2024-11-06 09:05:11.705679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 00:28:58.752 [2024-11-06 09:05:11.705793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.752 [2024-11-06 09:05:11.705821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.752 qpair failed and we were unable to recover it. 
00:28:58.752-00:28:58.755 [the same three-line failure sequence — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it. — repeats with advancing timestamps from 09:05:11.705960 through 09:05:11.719315, cycling over tqpair=0x7f6ad8000b90, 0x7f6ad0000b90, 0x7f6acc000b90, and 0x1853fa0, all with addr=10.0.0.2, port=4420; every attempt fails and no qpair is recovered.]
00:28:58.755 [2024-11-06 09:05:11.719391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.719418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.719529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.719556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.719639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.719671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.719753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.719781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.719867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.719896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 
00:28:58.755 [2024-11-06 09:05:11.720005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 
00:28:58.755 [2024-11-06 09:05:11.720541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.755 [2024-11-06 09:05:11.720713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.755 qpair failed and we were unable to recover it. 00:28:58.755 [2024-11-06 09:05:11.720812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.720844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.720925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.720952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.721147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.721696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.721917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.721943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.722279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.722821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.722854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.722958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.723482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.723936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.723963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.724040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.724157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.724262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.724418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.756 [2024-11-06 09:05:11.724535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 
00:28:58.756 [2024-11-06 09:05:11.724653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.756 [2024-11-06 09:05:11.724682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.756 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.724763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.724790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.724902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.724930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 
00:28:58.757 [2024-11-06 09:05:11.725258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 
00:28:58.757 [2024-11-06 09:05:11.725843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.725887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.725980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 
00:28:58.757 [2024-11-06 09:05:11.726463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.726932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.726961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 
00:28:58.757 [2024-11-06 09:05:11.727064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 
00:28:58.757 [2024-11-06 09:05:11.727649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.727921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.757 [2024-11-06 09:05:11.727948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.757 qpair failed and we were unable to recover it. 00:28:58.757 [2024-11-06 09:05:11.728030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.758 [2024-11-06 09:05:11.728057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.758 qpair failed and we were unable to recover it. 00:28:58.758 [2024-11-06 09:05:11.728157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.758 [2024-11-06 09:05:11.728182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.758 qpair failed and we were unable to recover it. 
00:28:58.758 [2024-11-06 09:05:11.728267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.728909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.728995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.729936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.729962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:58.758 [2024-11-06 09:05:11.730044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.730154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:58.758 [2024-11-06 09:05:11.730179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.730262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:58.758 [2024-11-06 09:05:11.730369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:58.758 [2024-11-06 09:05:11.730502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.758 [2024-11-06 09:05:11.730612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.730722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.730875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.730903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.758 [2024-11-06 09:05:11.731576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.758 [2024-11-06 09:05:11.731603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.758 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.731698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.731738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.731846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.731876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.731964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.731991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.732938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.732966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.733956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.733982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.734870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.734899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.735002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.735040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.735127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.735155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.735250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.735285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.735363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.735390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.759 [2024-11-06 09:05:11.735468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.759 [2024-11-06 09:05:11.735495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.759 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.735577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.735603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.735710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.735735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.735815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.735846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.735937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.735963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.736918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.736998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.737880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.737981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.738921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.738948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.739032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.739059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.739140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.739167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.739255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.739281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.739358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.760 [2024-11-06 09:05:11.739384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.760 qpair failed and we were unable to recover it.
00:28:58.760 [2024-11-06 09:05:11.739468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.739494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.739599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.739625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.739703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.739731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.739824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.739859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.739945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.739977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.740898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.740984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.761 [2024-11-06 09:05:11.741562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.761 qpair failed and we were unable to recover it.
00:28:58.761 [2024-11-06 09:05:11.741668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.741694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.741810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.741843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.741930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.741956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 
00:28:58.761 [2024-11-06 09:05:11.742253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 
00:28:58.761 [2024-11-06 09:05:11.742816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.742962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.742991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.743080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.743112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.743205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.743232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.761 qpair failed and we were unable to recover it. 00:28:58.761 [2024-11-06 09:05:11.743319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.761 [2024-11-06 09:05:11.743346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.743432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.743458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.743535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.743561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.743640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.743665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.743762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.743802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.743905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.743934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.744020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.744597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.744926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.744951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.745144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.745696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.745920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.745946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 
00:28:58.762 [2024-11-06 09:05:11.746253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.762 [2024-11-06 09:05:11.746636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.762 qpair failed and we were unable to recover it. 00:28:58.762 [2024-11-06 09:05:11.746732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.746759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.746858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.746894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.746970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.746995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.747414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.747940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.747967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.748053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.748625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.748889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.748976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.749198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.749796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.749921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.749949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.750031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.750132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.750235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 
00:28:58.763 [2024-11-06 09:05:11.750368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.750476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.763 qpair failed and we were unable to recover it. 00:28:58.763 [2024-11-06 09:05:11.750591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.763 [2024-11-06 09:05:11.750620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.750708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.750737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.750828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.750869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.750958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.750984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.751528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.751751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.764 [2024-11-06 09:05:11.751874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.751904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.751992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:58.764 [2024-11-06 09:05:11.752112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.764 [2024-11-06 09:05:11.752233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.752360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:28:58.764 [2024-11-06 09:05:11.752476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.752602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.752716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.752816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.752925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.752950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.753029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.753617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.753898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.753981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.754006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.754089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.754115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 
00:28:58.764 [2024-11-06 09:05:11.754248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.764 [2024-11-06 09:05:11.754275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.764 qpair failed and we were unable to recover it. 00:28:58.764 [2024-11-06 09:05:11.754362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.754471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.754586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.754699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.754803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.754928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.754954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.755370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.755810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.755927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.755953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.756467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.756947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.756974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.757051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.757619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.757885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.757974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.758001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 00:28:58.765 [2024-11-06 09:05:11.758084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.765 [2024-11-06 09:05:11.758110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.765 qpair failed and we were unable to recover it. 
00:28:58.765 [2024-11-06 09:05:11.758227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.758337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.758463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.758572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.758680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.758796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.758926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.758952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.759393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.759854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.759894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.759982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.760569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.760917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.760944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.761140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 
00:28:58.766 [2024-11-06 09:05:11.761721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.761955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.761981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.766 [2024-11-06 09:05:11.762067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.766 [2024-11-06 09:05:11.762092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.766 qpair failed and we were unable to recover it. 00:28:58.767 [2024-11-06 09:05:11.762178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.767 [2024-11-06 09:05:11.762203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.767 qpair failed and we were unable to recover it. 
00:28:58.767 [2024-11-06 09:05:11.762320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.762443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.762557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.762672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.762782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.762933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.762961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.763897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.763981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.764893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.764988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.765130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.765237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.765383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.765500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.767 [2024-11-06 09:05:11.765639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.767 [2024-11-06 09:05:11.765666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.767 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.765750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.765776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.765862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.765900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.765985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.766913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.766993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.767928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.767954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.768956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.768983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.769094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.769235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.769379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.769482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.768 [2024-11-06 09:05:11.769624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.768 qpair failed and we were unable to recover it.
00:28:58.768 [2024-11-06 09:05:11.769704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.769735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.769826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.769861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.769939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.770909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.770939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.771946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.771972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.772910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.772988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.769 [2024-11-06 09:05:11.773847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.769 qpair failed and we were unable to recover it.
00:28:58.769 [2024-11-06 09:05:11.773938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.770 [2024-11-06 09:05:11.773964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.770 qpair failed and we were unable to recover it.
00:28:58.770 [2024-11-06 09:05:11.774045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.774662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.774909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.774937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.775253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.775763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.775890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.775916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.776475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.776958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.776984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.777066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 
00:28:58.770 [2024-11-06 09:05:11.777619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.770 [2024-11-06 09:05:11.777866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.770 [2024-11-06 09:05:11.777903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.770 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.777994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.778219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.778799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.778930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.778955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.779434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.779923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.779949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.780033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.780617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.780887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.780980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.781091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.781209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.781354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.781471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.781580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.771 [2024-11-06 09:05:11.781691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 
00:28:58.771 [2024-11-06 09:05:11.781841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.771 [2024-11-06 09:05:11.781869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.771 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.781955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.781982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.782419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.782879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.782907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.783022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.783609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.783948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.783975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.784185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.784757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.784904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.784985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.785090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.785205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 
00:28:58.772 [2024-11-06 09:05:11.785344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.785454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.785603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.772 qpair failed and we were unable to recover it. 00:28:58.772 [2024-11-06 09:05:11.785716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.772 [2024-11-06 09:05:11.785744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.785862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.785915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.786002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.786625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.786958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.786984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.787180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.787750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.787969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.787994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.788304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.788882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.788911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 
00:28:58.773 [2024-11-06 09:05:11.789007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.789032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.789161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.789191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.789315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.789342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.773 [2024-11-06 09:05:11.789422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.773 [2024-11-06 09:05:11.789447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.773 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.789524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.789551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.789629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.789656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.789742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.789770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.789855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.789881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.789975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.790222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.790799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.790965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.790992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.791456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.791933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.791959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.792040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.792622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.792909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.792994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.793110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.793142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 
00:28:58.774 [2024-11-06 09:05:11.793239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.793266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.793374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.774 [2024-11-06 09:05:11.793399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.774 qpair failed and we were unable to recover it. 00:28:58.774 [2024-11-06 09:05:11.793474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.793507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.793622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.793649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.793735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.793763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.793877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.793907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.794507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.794894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.794975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.795080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.795720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.795960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.795986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.796335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 00:28:58.775 [2024-11-06 09:05:11.796821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.775 [2024-11-06 09:05:11.796861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.775 qpair failed and we were unable to recover it. 
00:28:58.775 [2024-11-06 09:05:11.796949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.796975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.775 [2024-11-06 09:05:11.797062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.797087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.775 [2024-11-06 09:05:11.797179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.797204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.775 [2024-11-06 09:05:11.797290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.797321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.775 [2024-11-06 09:05:11.797402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.797428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.775 [2024-11-06 09:05:11.797525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.775 [2024-11-06 09:05:11.797554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.775 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.797660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.797687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.797785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.797814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.797918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.797944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.798889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.798997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.799921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.799946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.800873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.800902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 Malloc0
00:28:58.776 [2024-11-06 09:05:11.800998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.801025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.801111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.801139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.801233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.801259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.801340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.776 [2024-11-06 09:05:11.801366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 [2024-11-06 09:05:11.801454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.801481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.776 qpair failed and we were unable to recover it.
00:28:58.776 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:58.776 [2024-11-06 09:05:11.801590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.776 [2024-11-06 09:05:11.801623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.801712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.801738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.801828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.801878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.801970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.801996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.802896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.802924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.803943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.803970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804596] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:58.777 [2024-11-06 09:05:11.804644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.804912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.804941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.805024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.805051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.805145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.805173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.805259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.805286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.805378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.777 [2024-11-06 09:05:11.805406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.777 qpair failed and we were unable to recover it.
00:28:58.777 [2024-11-06 09:05:11.805486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.805512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.805599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.805625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.805735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.805761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.805847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.805874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.805961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.805987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.806910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.806940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.807910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.807936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.808015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.808040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.808118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.778 [2024-11-06 09:05:11.808143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.778 qpair failed and we were unable to recover it.
00:28:58.778 [2024-11-06 09:05:11.808220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 00:28:58.778 [2024-11-06 09:05:11.808334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 00:28:58.778 [2024-11-06 09:05:11.808452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 00:28:58.778 [2024-11-06 09:05:11.808594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 00:28:58.778 [2024-11-06 09:05:11.808722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 
00:28:58.778 [2024-11-06 09:05:11.808840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.778 qpair failed and we were unable to recover it. 00:28:58.778 [2024-11-06 09:05:11.808951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.778 [2024-11-06 09:05:11.808977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.809414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.809858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.809891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.809977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.810585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.810952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.810978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.811173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.811734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.811961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.811987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 
00:28:58.779 [2024-11-06 09:05:11.812306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.779 [2024-11-06 09:05:11.812674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.779 qpair failed and we were unable to recover it. 00:28:58.779 [2024-11-06 09:05:11.812763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.780 [2024-11-06 09:05:11.812790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.780 qpair failed and we were unable to recover it. 
00:28:58.780 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:28:58.780 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 
00:28:58.780 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:58.780 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:28:58.781 [2024-11-06 09:05:11.820518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.781 [2024-11-06 09:05:11.820546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.781 qpair failed and we were unable to recover it. 00:28:58.781 [2024-11-06 09:05:11.820631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.781 [2024-11-06 09:05:11.820657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.781 qpair failed and we were unable to recover it. 00:28:58.781 [2024-11-06 09:05:11.820739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.781 [2024-11-06 09:05:11.820765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.781 qpair failed and we were unable to recover it. 00:28:58.781 [2024-11-06 09:05:11.820871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.781 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.781 [2024-11-06 09:05:11.820900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.781 qpair failed and we were unable to recover it. 00:28:58.781 [2024-11-06 09:05:11.820987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.781 [2024-11-06 09:05:11.821013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.781 qpair failed and we were unable to recover it. 
00:28:58.781 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:58.781 [2024-11-06 09:05:11.821094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.782 [2024-11-06 09:05:11.821230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.821339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.821453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.821566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.821691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.821828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.821951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.821978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.822174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.822811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.822922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.822950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.823369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.823865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.823892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.823981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.824570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.824955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.824981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.825059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.825085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 
00:28:58.782 [2024-11-06 09:05:11.825170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.825196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.782 [2024-11-06 09:05:11.825285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.782 [2024-11-06 09:05:11.825312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.782 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.825414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.825446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.825530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.825556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.825645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.825672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.825751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.825778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.825868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.825895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.825985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.826316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.826812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.826959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.826987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.827571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.827938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.827964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.828048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.828162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.828279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.828393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.828511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.828617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 
00:28:58.783 [2024-11-06 09:05:11.828740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.783 [2024-11-06 09:05:11.828894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 [2024-11-06 09:05:11.828926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.783 [2024-11-06 09:05:11.829016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.783 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.783 [2024-11-06 09:05:11.829045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.783 qpair failed and we were unable to recover it. 00:28:58.784 [2024-11-06 09:05:11.829127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:58.784 [2024-11-06 09:05:11.829153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420 00:28:58.784 qpair failed and we were unable to recover it. 
00:28:58.784 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.784 [2024-11-06 09:05:11.829237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.784 [2024-11-06 09:05:11.829351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.829472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.829599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.829717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.829828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.829947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.829973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.830913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.830993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.831963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.831990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad0000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1853fa0 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6acc000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.832643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:58.784 [2024-11-06 09:05:11.832671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6ad8000b90 with addr=10.0.0.2, port=4420
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 [2024-11-06 09:05:11.833043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:58.784 [2024-11-06 09:05:11.835418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.784 [2024-11-06 09:05:11.835545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.784 [2024-11-06 09:05:11.835573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.784 [2024-11-06 09:05:11.835589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.784 [2024-11-06 09:05:11.835602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.784 [2024-11-06 09:05:11.835636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.784 qpair failed and we were unable to recover it.
00:28:58.784 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.784 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:58.785 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:58.785 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:58.785 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:58.785 09:05:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 939110
00:28:58.785 [2024-11-06 09:05:11.845278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.845374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.845402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.845417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.845429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.845471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.855268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.855353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.855379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.855395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.855408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.855437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.865310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.865403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.865429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.865444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.865456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.865484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.875248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.875335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.875365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.875386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.875398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.875428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.885342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.885428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.885455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.885469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.885482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.885511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.895336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.895431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.895456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.895470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.895482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.895511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.905324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.905419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.905448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.905465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.905477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.905508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.915409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.915503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.915528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.915543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.915558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.915595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.925422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.925508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.925533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.925547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.925559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.925589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.935389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.935466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.935491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.935505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.935518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.935547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.945419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.945507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.945533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.945548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.945560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.945590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.955425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.955506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.955530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.955544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.955557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.955586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.785 qpair failed and we were unable to recover it.
00:28:58.785 [2024-11-06 09:05:11.965454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.785 [2024-11-06 09:05:11.965534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.785 [2024-11-06 09:05:11.965558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.785 [2024-11-06 09:05:11.965572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.785 [2024-11-06 09:05:11.965584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.785 [2024-11-06 09:05:11.965614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.786 qpair failed and we were unable to recover it.
00:28:58.786 [2024-11-06 09:05:11.975549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.786 [2024-11-06 09:05:11.975631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.786 [2024-11-06 09:05:11.975655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.786 [2024-11-06 09:05:11.975669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.786 [2024-11-06 09:05:11.975681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.786 [2024-11-06 09:05:11.975710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.786 qpair failed and we were unable to recover it.
00:28:58.786 [2024-11-06 09:05:11.985562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.786 [2024-11-06 09:05:11.985664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.786 [2024-11-06 09:05:11.985690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.786 [2024-11-06 09:05:11.985705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.786 [2024-11-06 09:05:11.985717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.786 [2024-11-06 09:05:11.985747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.786 qpair failed and we were unable to recover it.
00:28:58.786 [2024-11-06 09:05:11.995582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:58.786 [2024-11-06 09:05:11.995672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:58.786 [2024-11-06 09:05:11.995698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:58.786 [2024-11-06 09:05:11.995712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:58.786 [2024-11-06 09:05:11.995725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:58.786 [2024-11-06 09:05:11.995754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:58.786 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.005580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.005663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.005699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.005715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.005728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.005757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.015621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.015706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.015731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.015745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.015757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.015786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.025626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.025716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.025740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.025754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.025766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.025795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.035655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.035737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.035765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.035780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.035792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.035821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.045700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.045784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.045817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.045843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.045866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.045898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.055714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.055799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.055825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.055849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.055863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.055893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.065732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.065822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.065853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.065869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.065881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.065910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.075780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.075876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.075905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.075920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.075932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.075962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.085890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.046 [2024-11-06 09:05:12.085970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.046 [2024-11-06 09:05:12.085995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.046 [2024-11-06 09:05:12.086009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.046 [2024-11-06 09:05:12.086022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.046 [2024-11-06 09:05:12.086051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.046 qpair failed and we were unable to recover it.
00:28:59.046 [2024-11-06 09:05:12.095912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.046 [2024-11-06 09:05:12.095996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.046 [2024-11-06 09:05:12.096021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.046 [2024-11-06 09:05:12.096034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.046 [2024-11-06 09:05:12.096047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.046 [2024-11-06 09:05:12.096077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.046 qpair failed and we were unable to recover it. 
00:28:59.046 [2024-11-06 09:05:12.105887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.105974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.105999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.106013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.106025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.106054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.115876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.115968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.115993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.116008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.116020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.116049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.125966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.126059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.126086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.126101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.126113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.126142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.135938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.136020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.136050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.136065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.136077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.136107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.145975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.146062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.146086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.146101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.146113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.146142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.156010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.156097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.156130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.156147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.156160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.156190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.166009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.166099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.166124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.166138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.166151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.166180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.176092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.176175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.176199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.176213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.176231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.176261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.186154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.186248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.186275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.186290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.186302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.186343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.196094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.196183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.196209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.196224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.196236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.196265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.206125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.206208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.206234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.206248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.206260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.047 [2024-11-06 09:05:12.206289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.047 qpair failed and we were unable to recover it. 
00:28:59.047 [2024-11-06 09:05:12.216163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.047 [2024-11-06 09:05:12.216246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.047 [2024-11-06 09:05:12.216271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.047 [2024-11-06 09:05:12.216285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.047 [2024-11-06 09:05:12.216297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.216327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.226205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.226294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.226318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.226332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.226345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.226374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.236294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.236381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.236420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.236435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.236447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.236477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.246250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.246345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.246371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.246385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.246397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.246426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.256309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.256390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.256415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.256429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.256441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.256470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.266304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.266395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.266425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.266440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.266452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.266481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.276337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.276419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.276443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.276458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.276470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.276498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.286364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.286445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.286470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.286484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.286496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.286525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.296467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.296557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.296587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.296601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.296614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.296643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.306482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.306571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.306595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.306615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.306628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.306657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.316483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.316563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.316588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.316602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.316614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.316643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.048 [2024-11-06 09:05:12.326480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.048 [2024-11-06 09:05:12.326570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.048 [2024-11-06 09:05:12.326596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.048 [2024-11-06 09:05:12.326610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.048 [2024-11-06 09:05:12.326622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.048 [2024-11-06 09:05:12.326651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.048 qpair failed and we were unable to recover it. 
00:28:59.308 [2024-11-06 09:05:12.336541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.308 [2024-11-06 09:05:12.336630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.308 [2024-11-06 09:05:12.336658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.308 [2024-11-06 09:05:12.336676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.308 [2024-11-06 09:05:12.336688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.308 [2024-11-06 09:05:12.336730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.308 qpair failed and we were unable to recover it. 
00:28:59.308 [2024-11-06 09:05:12.346548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.308 [2024-11-06 09:05:12.346651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.308 [2024-11-06 09:05:12.346677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.308 [2024-11-06 09:05:12.346691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.308 [2024-11-06 09:05:12.346704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.308 [2024-11-06 09:05:12.346742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.308 qpair failed and we were unable to recover it. 
00:28:59.309 [2024-11-06 09:05:12.356619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.309 [2024-11-06 09:05:12.356708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.309 [2024-11-06 09:05:12.356752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.309 [2024-11-06 09:05:12.356768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.309 [2024-11-06 09:05:12.356780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.309 [2024-11-06 09:05:12.356843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.309 qpair failed and we were unable to recover it. 
00:28:59.309 [2024-11-06 09:05:12.366625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.366713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.366743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.366758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.366770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.366799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.376651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.376730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.376755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.376768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.376781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.376822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.386689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.386791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.386819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.386843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.386857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.386899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.396701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.396793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.396818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.396839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.396853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.396883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.406766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.406860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.406884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.406898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.406911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.406941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.416741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.416823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.416853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.416868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.416881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.416911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.426860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.426994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.427019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.427034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.427046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.427075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.436800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.436901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.436925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.436945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.436959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.436988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.446907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.446994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.447018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.447033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.447045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.447074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.456849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.456932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.456957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.456971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.456982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.309 [2024-11-06 09:05:12.457012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.309 qpair failed and we were unable to recover it.
00:28:59.309 [2024-11-06 09:05:12.466904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.309 [2024-11-06 09:05:12.466993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.309 [2024-11-06 09:05:12.467018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.309 [2024-11-06 09:05:12.467032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.309 [2024-11-06 09:05:12.467043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.467072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.476927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.477014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.477039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.477053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.477066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.477100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.486984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.487071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.487095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.487109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.487121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.487150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.497030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.497118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.497143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.497157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.497169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.497198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.507030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.507122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.507148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.507163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.507174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.507204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.517049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.517139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.517167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.517183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.517196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.517226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.527072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.527174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.527200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.527214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.527226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.527255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.537088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.537182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.537206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.537219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.537231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.537261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.547133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.547221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.547244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.547259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.547271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.547300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.557161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.557294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.557320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.557334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.557346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.557376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.567212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.567303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.567336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.567353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.567365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.567407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.577214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.577309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.577334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.310 [2024-11-06 09:05:12.577349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.310 [2024-11-06 09:05:12.577361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.310 [2024-11-06 09:05:12.577390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.310 qpair failed and we were unable to recover it.
00:28:59.310 [2024-11-06 09:05:12.587272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.310 [2024-11-06 09:05:12.587361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.310 [2024-11-06 09:05:12.587387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.311 [2024-11-06 09:05:12.587401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.311 [2024-11-06 09:05:12.587413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.311 [2024-11-06 09:05:12.587442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.311 qpair failed and we were unable to recover it.
00:28:59.570 [2024-11-06 09:05:12.597302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.570 [2024-11-06 09:05:12.597387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.570 [2024-11-06 09:05:12.597412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.570 [2024-11-06 09:05:12.597426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.570 [2024-11-06 09:05:12.597438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.570 [2024-11-06 09:05:12.597467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.570 qpair failed and we were unable to recover it.
00:28:59.570 [2024-11-06 09:05:12.607354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.570 [2024-11-06 09:05:12.607439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.570 [2024-11-06 09:05:12.607463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.570 [2024-11-06 09:05:12.607477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.570 [2024-11-06 09:05:12.607495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.570 [2024-11-06 09:05:12.607525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.570 qpair failed and we were unable to recover it.
00:28:59.570 [2024-11-06 09:05:12.617702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.570 [2024-11-06 09:05:12.617810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.570 [2024-11-06 09:05:12.617851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.570 [2024-11-06 09:05:12.617866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.570 [2024-11-06 09:05:12.617878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.617909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.627552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.627664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.627689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.627703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.627715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.627744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.637471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.637561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.637590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.637606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.637618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.637648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.647501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.647591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.647616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.647630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.647642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.647671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.657476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.657557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.657582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.657595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.657607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.657636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.667533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.667622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.667646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.667660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.667672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.667701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.677509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.677596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.677621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.677635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.677647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.677675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.687565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.687696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.687722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.687736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.687749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.687789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.697575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.697657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.697692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.697707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.697720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.697749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.707598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.571 [2024-11-06 09:05:12.707685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.571 [2024-11-06 09:05:12.707709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.571 [2024-11-06 09:05:12.707722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.571 [2024-11-06 09:05:12.707734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.571 [2024-11-06 09:05:12.707764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.571 qpair failed and we were unable to recover it.
00:28:59.571 [2024-11-06 09:05:12.717614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.571 [2024-11-06 09:05:12.717701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.571 [2024-11-06 09:05:12.717725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.571 [2024-11-06 09:05:12.717738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.571 [2024-11-06 09:05:12.717750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.571 [2024-11-06 09:05:12.717779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.571 qpair failed and we were unable to recover it. 
00:28:59.571 [2024-11-06 09:05:12.727646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.571 [2024-11-06 09:05:12.727733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.571 [2024-11-06 09:05:12.727763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.571 [2024-11-06 09:05:12.727777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.571 [2024-11-06 09:05:12.727789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.571 [2024-11-06 09:05:12.727818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.571 qpair failed and we were unable to recover it. 
00:28:59.571 [2024-11-06 09:05:12.737714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.571 [2024-11-06 09:05:12.737803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.571 [2024-11-06 09:05:12.737828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.571 [2024-11-06 09:05:12.737854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.571 [2024-11-06 09:05:12.737873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.737903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.747696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.747839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.747865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.747880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.747892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.747921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.757724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.757808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.757841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.757858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.757871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.757901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.767854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.767984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.768010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.768024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.768037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.768067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.777794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.777884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.777911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.777926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.777938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.777968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.787844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.787945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.787969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.787983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.787995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.788024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.797824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.797918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.797944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.797958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.797970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.797999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.807887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.807972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.807997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.808011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.808023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.808052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.817897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.817985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.818010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.818024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.818036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.818065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.828012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.828108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.828141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.828155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.828168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.828197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.837974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.838081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.838106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.838120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.838133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.838173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.847968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.848072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.848099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.848113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.572 [2024-11-06 09:05:12.848125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.572 [2024-11-06 09:05:12.848155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.572 qpair failed and we were unable to recover it. 
00:28:59.572 [2024-11-06 09:05:12.857994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.572 [2024-11-06 09:05:12.858076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.572 [2024-11-06 09:05:12.858101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.572 [2024-11-06 09:05:12.858116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.573 [2024-11-06 09:05:12.858127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.573 [2024-11-06 09:05:12.858157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.573 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.868074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.868161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.868189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.868209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.868222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.868252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.878097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.878212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.878239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.878253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.878265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.878294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.888083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.888164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.888188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.888203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.888215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.888244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.898102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.898201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.898227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.898241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.898253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.898281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.908268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.908359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.908388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.908405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.908417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.908453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.918251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.918336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.918360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.918374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.918395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.918424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.832 qpair failed and we were unable to recover it. 
00:28:59.832 [2024-11-06 09:05:12.928216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.832 [2024-11-06 09:05:12.928304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.832 [2024-11-06 09:05:12.928328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.832 [2024-11-06 09:05:12.928342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.832 [2024-11-06 09:05:12.928355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.832 [2024-11-06 09:05:12.928383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.938229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.833 [2024-11-06 09:05:12.938316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.833 [2024-11-06 09:05:12.938341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.833 [2024-11-06 09:05:12.938355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.833 [2024-11-06 09:05:12.938367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.833 [2024-11-06 09:05:12.938396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.948249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.833 [2024-11-06 09:05:12.948339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.833 [2024-11-06 09:05:12.948362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.833 [2024-11-06 09:05:12.948376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.833 [2024-11-06 09:05:12.948388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.833 [2024-11-06 09:05:12.948417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.958319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.833 [2024-11-06 09:05:12.958411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.833 [2024-11-06 09:05:12.958435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.833 [2024-11-06 09:05:12.958449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.833 [2024-11-06 09:05:12.958461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.833 [2024-11-06 09:05:12.958490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.968345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.833 [2024-11-06 09:05:12.968426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.833 [2024-11-06 09:05:12.968451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.833 [2024-11-06 09:05:12.968465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.833 [2024-11-06 09:05:12.968477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.833 [2024-11-06 09:05:12.968519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.978319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:59.833 [2024-11-06 09:05:12.978403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:59.833 [2024-11-06 09:05:12.978427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:59.833 [2024-11-06 09:05:12.978440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.833 [2024-11-06 09:05:12.978452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:28:59.833 [2024-11-06 09:05:12.978481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.833 qpair failed and we were unable to recover it. 
00:28:59.833 [2024-11-06 09:05:12.988396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:59.833 [2024-11-06 09:05:12.988486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:59.833 [2024-11-06 09:05:12.988510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:59.833 [2024-11-06 09:05:12.988524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:59.833 [2024-11-06 09:05:12.988536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:28:59.833 [2024-11-06 09:05:12.988565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:59.833 qpair failed and we were unable to recover it.
[ ... the same 7-record CONNECT failure sequence repeated 33 more times at ~10 ms intervals, 09:05:12.998 through 09:05:13.319, each ending "qpair failed and we were unable to recover it." ... ]
00:29:00.095 [2024-11-06 09:05:13.329325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.095 [2024-11-06 09:05:13.329404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.095 [2024-11-06 09:05:13.329429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.095 [2024-11-06 09:05:13.329443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.095 [2024-11-06 09:05:13.329456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.095 [2024-11-06 09:05:13.329485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.095 qpair failed and we were unable to recover it.
00:29:00.095 [2024-11-06 09:05:13.339360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.095 [2024-11-06 09:05:13.339451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.095 [2024-11-06 09:05:13.339475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.095 [2024-11-06 09:05:13.339489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.095 [2024-11-06 09:05:13.339501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.095 [2024-11-06 09:05:13.339530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.095 qpair failed and we were unable to recover it. 
00:29:00.095 [2024-11-06 09:05:13.349536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.095 [2024-11-06 09:05:13.349659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.095 [2024-11-06 09:05:13.349688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.095 [2024-11-06 09:05:13.349704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.095 [2024-11-06 09:05:13.349716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.095 [2024-11-06 09:05:13.349746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.095 qpair failed and we were unable to recover it. 
00:29:00.095 [2024-11-06 09:05:13.359442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.095 [2024-11-06 09:05:13.359538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.095 [2024-11-06 09:05:13.359567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.095 [2024-11-06 09:05:13.359582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.095 [2024-11-06 09:05:13.359594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.095 [2024-11-06 09:05:13.359623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.095 qpair failed and we were unable to recover it. 
00:29:00.095 [2024-11-06 09:05:13.369466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.095 [2024-11-06 09:05:13.369548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.095 [2024-11-06 09:05:13.369576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.095 [2024-11-06 09:05:13.369591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.095 [2024-11-06 09:05:13.369603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.095 [2024-11-06 09:05:13.369633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.095 qpair failed and we were unable to recover it. 
00:29:00.095 [2024-11-06 09:05:13.379479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.095 [2024-11-06 09:05:13.379560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.095 [2024-11-06 09:05:13.379586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.095 [2024-11-06 09:05:13.379600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.095 [2024-11-06 09:05:13.379611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.095 [2024-11-06 09:05:13.379640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.095 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.389539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.389679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.389706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.389721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.389734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.389775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.399558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.399654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.399680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.399694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.399706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.399747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.409615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.409714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.409741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.409755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.409768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.409809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.419577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.419665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.419689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.419704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.419716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.419745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.429653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.429772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.429797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.429818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.429839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.429872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.439652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.439738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.439763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.439777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.439789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.439819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.449696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.449781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.449806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.449820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.449839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.449870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.459820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.459958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.459985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.459999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.460011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.460041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.469773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.355 [2024-11-06 09:05:13.469877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.355 [2024-11-06 09:05:13.469904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.355 [2024-11-06 09:05:13.469921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.355 [2024-11-06 09:05:13.469933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.355 [2024-11-06 09:05:13.469969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.355 qpair failed and we were unable to recover it. 
00:29:00.355 [2024-11-06 09:05:13.479852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.479988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.480014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.480029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.480041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.480070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.489779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.489877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.489901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.489915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.489927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.489956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.499804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.499897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.499921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.499935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.499948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.499977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.509887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.509998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.510024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.510038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.510050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.510079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.519885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.519980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.520004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.520017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.520029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.520059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.529935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.530023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.530049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.530062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.530075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.530104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.539957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.540082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.540107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.540122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.540135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.540164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.550018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.550107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.550132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.550146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.550158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.550187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.560072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.560159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.560184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.560204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.560217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.560258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.570074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.570156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.570181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.570195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.570207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.570236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.580061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.580143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.580169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.580183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.580195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.580237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.356 qpair failed and we were unable to recover it. 
00:29:00.356 [2024-11-06 09:05:13.590148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.356 [2024-11-06 09:05:13.590255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.356 [2024-11-06 09:05:13.590281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.356 [2024-11-06 09:05:13.590295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.356 [2024-11-06 09:05:13.590307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.356 [2024-11-06 09:05:13.590336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.357 qpair failed and we were unable to recover it. 
00:29:00.357 [2024-11-06 09:05:13.600126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.357 [2024-11-06 09:05:13.600211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.357 [2024-11-06 09:05:13.600236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.357 [2024-11-06 09:05:13.600250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.357 [2024-11-06 09:05:13.600263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.357 [2024-11-06 09:05:13.600299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.357 qpair failed and we were unable to recover it. 
00:29:00.357 [2024-11-06 09:05:13.610175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.357 [2024-11-06 09:05:13.610257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.357 [2024-11-06 09:05:13.610285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.357 [2024-11-06 09:05:13.610300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.357 [2024-11-06 09:05:13.610312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.357 [2024-11-06 09:05:13.610341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.357 qpair failed and we were unable to recover it.
00:29:00.357 [2024-11-06 09:05:13.620155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.357 [2024-11-06 09:05:13.620281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.357 [2024-11-06 09:05:13.620307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.357 [2024-11-06 09:05:13.620321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.357 [2024-11-06 09:05:13.620333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.357 [2024-11-06 09:05:13.620362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.357 qpair failed and we were unable to recover it.
00:29:00.357 [2024-11-06 09:05:13.630187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.357 [2024-11-06 09:05:13.630316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.357 [2024-11-06 09:05:13.630342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.357 [2024-11-06 09:05:13.630356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.357 [2024-11-06 09:05:13.630369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.357 [2024-11-06 09:05:13.630398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.357 qpair failed and we were unable to recover it.
00:29:00.357 [2024-11-06 09:05:13.640327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.357 [2024-11-06 09:05:13.640419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.357 [2024-11-06 09:05:13.640444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.357 [2024-11-06 09:05:13.640458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.357 [2024-11-06 09:05:13.640470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.357 [2024-11-06 09:05:13.640499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.357 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.650251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.650383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.650408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.650422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.650434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.650463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.660310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.660396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.660419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.660434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.660446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.660474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.670356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.670448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.670473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.670487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.670500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.670530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.680365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.680453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.680477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.680491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.680503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.680531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.690391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.690469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.690497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.690512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.690524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.690553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.700413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.700499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.700526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.700541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.616 [2024-11-06 09:05:13.700554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.616 [2024-11-06 09:05:13.700583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.616 qpair failed and we were unable to recover it.
00:29:00.616 [2024-11-06 09:05:13.710462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.616 [2024-11-06 09:05:13.710563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.616 [2024-11-06 09:05:13.710589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.616 [2024-11-06 09:05:13.710603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.710614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.710643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.720497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.720586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.720610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.720624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.720636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.720665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.730498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.730624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.730650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.730665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.730682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.730712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.740505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.740633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.740658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.740672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.740684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.740714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.750568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.750658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.750681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.750695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.750707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.750736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.760591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.760719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.760745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.760759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.760772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.760801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.770591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.770673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.770698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.770711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.770724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.770752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.780628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.780756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.780782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.780796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.780808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.780844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.790662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.790777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.790804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.790818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.790839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.790870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.800670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.800747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.800772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.800785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.800798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.800827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.810707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.810821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.810860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.810876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.810888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.810917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.820710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.820790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.820821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.820844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.820858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.820887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.830865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.830984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.831009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.831023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.831035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.831064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.840785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.840921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.840947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.840961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.840973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.841003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.850842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.850930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.850955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.850969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.850981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.851010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.860939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.861053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.861079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.861094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.861112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.861155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.870997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.871086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.871111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.871125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.871137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.871178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.880933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.881051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.617 [2024-11-06 09:05:13.881077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.617 [2024-11-06 09:05:13.881091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.617 [2024-11-06 09:05:13.881103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.617 [2024-11-06 09:05:13.881133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.617 qpair failed and we were unable to recover it.
00:29:00.617 [2024-11-06 09:05:13.890936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.617 [2024-11-06 09:05:13.891021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-11-06 09:05:13.891047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-11-06 09:05:13.891062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-11-06 09:05:13.891074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.618 [2024-11-06 09:05:13.891103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.618 [2024-11-06 09:05:13.901026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.618 [2024-11-06 09:05:13.901155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.618 [2024-11-06 09:05:13.901184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.618 [2024-11-06 09:05:13.901199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.618 [2024-11-06 09:05:13.901212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.618 [2024-11-06 09:05:13.901241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.618 qpair failed and we were unable to recover it.
00:29:00.876 [2024-11-06 09:05:13.911044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.876 [2024-11-06 09:05:13.911167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.876 [2024-11-06 09:05:13.911193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.876 [2024-11-06 09:05:13.911208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.876 [2024-11-06 09:05:13.911220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.876 [2024-11-06 09:05:13.911250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.876 qpair failed and we were unable to recover it.
00:29:00.876 [2024-11-06 09:05:13.921039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.876 [2024-11-06 09:05:13.921129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.876 [2024-11-06 09:05:13.921155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.876 [2024-11-06 09:05:13.921171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.876 [2024-11-06 09:05:13.921183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.876 [2024-11-06 09:05:13.921213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.876 qpair failed and we were unable to recover it.
00:29:00.876 [2024-11-06 09:05:13.931078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.876 [2024-11-06 09:05:13.931173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.876 [2024-11-06 09:05:13.931201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.876 [2024-11-06 09:05:13.931216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.876 [2024-11-06 09:05:13.931229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.876 [2024-11-06 09:05:13.931259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.876 qpair failed and we were unable to recover it.
00:29:00.876 [2024-11-06 09:05:13.941087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.876 [2024-11-06 09:05:13.941171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.876 [2024-11-06 09:05:13.941196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.876 [2024-11-06 09:05:13.941210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.876 [2024-11-06 09:05:13.941222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.876 [2024-11-06 09:05:13.941251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.876 qpair failed and we were unable to recover it.
00:29:00.876 [2024-11-06 09:05:13.951182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:00.876 [2024-11-06 09:05:13.951274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:00.876 [2024-11-06 09:05:13.951298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:00.876 [2024-11-06 09:05:13.951312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:00.876 [2024-11-06 09:05:13.951325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:00.876 [2024-11-06 09:05:13.951354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:00.876 qpair failed and we were unable to recover it.
00:29:00.877 [2024-11-06 09:05:13.961174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:13.961261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:13.961293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:13.961311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:13.961323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:13.961353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:13.971197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:13.971278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:13.971303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:13.971317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:13.971329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:13.971358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:13.981217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:13.981335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:13.981360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:13.981375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:13.981387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:13.981416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:13.991268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:13.991353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:13.991377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:13.991397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:13.991410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:13.991451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.001283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.001369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.001394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.001408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.001421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.001451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.011321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.011410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.011434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.011448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.011461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.011490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.021330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.021413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.021438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.021453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.021465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.021494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.031380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.031470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.031494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.031508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.031521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.031556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.041416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.041508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.041546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.041560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.041572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.041601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.051399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.051481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.051506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.051521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.051533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.051562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.061441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.061526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.061561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.061576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.061588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.061624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.071476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.071577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.071602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.071616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.071627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.071656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.081515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.081625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.081652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.081666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.081678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.081707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.091544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.091624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.091648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.091662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.091674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.091703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.101542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.101665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.101690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.101704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.101716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.101746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.111621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.111716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.111742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.111756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.111768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.111797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.121603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.121711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.121741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.121756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.877 [2024-11-06 09:05:14.121768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.877 [2024-11-06 09:05:14.121797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.877 qpair failed and we were unable to recover it. 
00:29:00.877 [2024-11-06 09:05:14.131608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.877 [2024-11-06 09:05:14.131697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.877 [2024-11-06 09:05:14.131721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.877 [2024-11-06 09:05:14.131734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.878 [2024-11-06 09:05:14.131746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.878 [2024-11-06 09:05:14.131775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.878 qpair failed and we were unable to recover it. 
00:29:00.878 [2024-11-06 09:05:14.141662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.878 [2024-11-06 09:05:14.141744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.878 [2024-11-06 09:05:14.141770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.878 [2024-11-06 09:05:14.141784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.878 [2024-11-06 09:05:14.141796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.878 [2024-11-06 09:05:14.141825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.878 qpair failed and we were unable to recover it. 
00:29:00.878 [2024-11-06 09:05:14.151663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.878 [2024-11-06 09:05:14.151793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.878 [2024-11-06 09:05:14.151818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.878 [2024-11-06 09:05:14.151839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.878 [2024-11-06 09:05:14.151853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.878 [2024-11-06 09:05:14.151883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.878 qpair failed and we were unable to recover it. 
00:29:00.878 [2024-11-06 09:05:14.161683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:00.878 [2024-11-06 09:05:14.161769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:00.878 [2024-11-06 09:05:14.161796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:00.878 [2024-11-06 09:05:14.161810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:00.878 [2024-11-06 09:05:14.161822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:00.878 [2024-11-06 09:05:14.161865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:00.878 qpair failed and we were unable to recover it. 
00:29:01.137 [2024-11-06 09:05:14.171804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.137 [2024-11-06 09:05:14.171894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.137 [2024-11-06 09:05:14.171922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.137 [2024-11-06 09:05:14.171937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.137 [2024-11-06 09:05:14.171949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.137 [2024-11-06 09:05:14.171979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.137 qpair failed and we were unable to recover it. 
00:29:01.137 [2024-11-06 09:05:14.181755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.137 [2024-11-06 09:05:14.181846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.137 [2024-11-06 09:05:14.181871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.137 [2024-11-06 09:05:14.181885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.137 [2024-11-06 09:05:14.181896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.137 [2024-11-06 09:05:14.181926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.137 qpair failed and we were unable to recover it. 
00:29:01.137 [2024-11-06 09:05:14.191851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.137 [2024-11-06 09:05:14.191939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.137 [2024-11-06 09:05:14.191962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.137 [2024-11-06 09:05:14.191976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.137 [2024-11-06 09:05:14.191988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.137 [2024-11-06 09:05:14.192017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.137 qpair failed and we were unable to recover it. 
00:29:01.137 [2024-11-06 09:05:14.201824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.137 [2024-11-06 09:05:14.201914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.137 [2024-11-06 09:05:14.201939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.137 [2024-11-06 09:05:14.201953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.137 [2024-11-06 09:05:14.201964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.137 [2024-11-06 09:05:14.202006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.137 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.211867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.211951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.211976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.211990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.212002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.212043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.221846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.221933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.221957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.221971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.221983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.222013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.231886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.231973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.231997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.232011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.232023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.232052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.241912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.241997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.242021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.242035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.242047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.242076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.251938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.252033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.252067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.252083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.252095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.252125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.261988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.262071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.262096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.262110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.262122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.262151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.272003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.272090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.272114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.272128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.272140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.272170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.282045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.282129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.282153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.282166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.282178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.282207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.292155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.292236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.292259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.292273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.292291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.292321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.302060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.302142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.302168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.302182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.302194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.302223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.312152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.312241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.312264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.312278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.138 [2024-11-06 09:05:14.312290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.138 [2024-11-06 09:05:14.312319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.138 qpair failed and we were unable to recover it. 
00:29:01.138 [2024-11-06 09:05:14.322173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.138 [2024-11-06 09:05:14.322295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.138 [2024-11-06 09:05:14.322321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.138 [2024-11-06 09:05:14.322336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.322348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.322376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.332172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.332247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.332272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.332285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.332297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.332325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.342183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.342261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.342285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.342298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.342311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.342339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.352213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.352299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.352322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.352337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.352349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.352378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.362300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.362382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.362406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.362420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.362432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.362461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.372260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.372340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.372364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.372377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.372389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.372417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.382380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.382459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.382488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.382502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.382514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.382543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.392422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.392555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.392580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.392595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.392607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.392636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.402384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.402471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.402495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.402509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.402521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.402550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.412435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.412563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.412589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.412603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.412615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.412644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.139 [2024-11-06 09:05:14.422433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.139 [2024-11-06 09:05:14.422517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.139 [2024-11-06 09:05:14.422543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.139 [2024-11-06 09:05:14.422565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.139 [2024-11-06 09:05:14.422579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.139 [2024-11-06 09:05:14.422609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.139 qpair failed and we were unable to recover it. 
00:29:01.399 [2024-11-06 09:05:14.432493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.399 [2024-11-06 09:05:14.432585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.399 [2024-11-06 09:05:14.432609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.399 [2024-11-06 09:05:14.432623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.399 [2024-11-06 09:05:14.432635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.399 [2024-11-06 09:05:14.432664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.399 qpair failed and we were unable to recover it. 
00:29:01.399 [2024-11-06 09:05:14.442485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.399 [2024-11-06 09:05:14.442572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.399 [2024-11-06 09:05:14.442598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.399 [2024-11-06 09:05:14.442612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.399 [2024-11-06 09:05:14.442625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.399 [2024-11-06 09:05:14.442665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.399 qpair failed and we were unable to recover it. 
00:29:01.399 [2024-11-06 09:05:14.452523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.399 [2024-11-06 09:05:14.452609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.399 [2024-11-06 09:05:14.452635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.399 [2024-11-06 09:05:14.452650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.399 [2024-11-06 09:05:14.452662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.399 [2024-11-06 09:05:14.452690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.399 qpair failed and we were unable to recover it. 
00:29:01.399 [2024-11-06 09:05:14.462533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.399 [2024-11-06 09:05:14.462620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.399 [2024-11-06 09:05:14.462644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.399 [2024-11-06 09:05:14.462658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.399 [2024-11-06 09:05:14.462670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.399 [2024-11-06 09:05:14.462699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.399 qpair failed and we were unable to recover it. 
00:29:01.399 [2024-11-06 09:05:14.472566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.399 [2024-11-06 09:05:14.472656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.399 [2024-11-06 09:05:14.472680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.399 [2024-11-06 09:05:14.472693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.472706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.472734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.482569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.482651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.482677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.482691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.482703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.482733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.492586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.492672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.492696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.492710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.492721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.492751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.502644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.502724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.502748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.502762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.502774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.502803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.512658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.512770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.512799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.512814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.512826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.512867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.522696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.522781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.522805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.522819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.522841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.522874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.532756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.400 [2024-11-06 09:05:14.532864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.400 [2024-11-06 09:05:14.532890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.400 [2024-11-06 09:05:14.532905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.400 [2024-11-06 09:05:14.532917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.400 [2024-11-06 09:05:14.532946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.400 qpair failed and we were unable to recover it. 
00:29:01.400 [2024-11-06 09:05:14.542736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.400 [2024-11-06 09:05:14.542859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.400 [2024-11-06 09:05:14.542885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.400 [2024-11-06 09:05:14.542899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.400 [2024-11-06 09:05:14.542912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.400 [2024-11-06 09:05:14.542941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.400 qpair failed and we were unable to recover it.
00:29:01.400 [2024-11-06 09:05:14.552761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.400 [2024-11-06 09:05:14.552902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.400 [2024-11-06 09:05:14.552928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.400 [2024-11-06 09:05:14.552948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.400 [2024-11-06 09:05:14.552961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.400 [2024-11-06 09:05:14.552990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.400 qpair failed and we were unable to recover it.
00:29:01.400 [2024-11-06 09:05:14.562791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.400 [2024-11-06 09:05:14.562886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.400 [2024-11-06 09:05:14.562911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.400 [2024-11-06 09:05:14.562926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.400 [2024-11-06 09:05:14.562938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.400 [2024-11-06 09:05:14.562967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.400 qpair failed and we were unable to recover it.
00:29:01.400 [2024-11-06 09:05:14.572857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.400 [2024-11-06 09:05:14.572941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.400 [2024-11-06 09:05:14.572966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.400 [2024-11-06 09:05:14.572979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.400 [2024-11-06 09:05:14.572991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.400 [2024-11-06 09:05:14.573033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.400 qpair failed and we were unable to recover it.
00:29:01.400 [2024-11-06 09:05:14.582874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.400 [2024-11-06 09:05:14.582956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.400 [2024-11-06 09:05:14.582981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.400 [2024-11-06 09:05:14.582995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.400 [2024-11-06 09:05:14.583007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.583037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.592921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.593023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.593049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.593063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.593075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.593110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.602976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.603067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.603093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.603107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.603119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.603149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.612950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.613033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.613057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.613071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.613083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.613112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.622981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.623061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.623084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.623099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.623111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.623140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.633148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.633248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.633272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.633286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.633298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.633327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.643121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.643218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.643254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.643268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.643280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.643309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.653134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.653250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.653276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.653290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.653302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.653332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.663143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.663228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.663254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.663271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.663283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.663314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.673210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.673297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.673322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.673337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.673349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.673393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.401 [2024-11-06 09:05:14.683177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.401 [2024-11-06 09:05:14.683269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.401 [2024-11-06 09:05:14.683305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.401 [2024-11-06 09:05:14.683321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.401 [2024-11-06 09:05:14.683333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.401 [2024-11-06 09:05:14.683374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.401 qpair failed and we were unable to recover it.
00:29:01.661 [2024-11-06 09:05:14.693203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.661 [2024-11-06 09:05:14.693317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.661 [2024-11-06 09:05:14.693343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.661 [2024-11-06 09:05:14.693357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.661 [2024-11-06 09:05:14.693369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.661 [2024-11-06 09:05:14.693399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.661 qpair failed and we were unable to recover it.
00:29:01.661 [2024-11-06 09:05:14.703183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.661 [2024-11-06 09:05:14.703275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.661 [2024-11-06 09:05:14.703305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.661 [2024-11-06 09:05:14.703321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.661 [2024-11-06 09:05:14.703334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.661 [2024-11-06 09:05:14.703364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.661 qpair failed and we were unable to recover it.
00:29:01.661 [2024-11-06 09:05:14.713225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.661 [2024-11-06 09:05:14.713319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.661 [2024-11-06 09:05:14.713345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.661 [2024-11-06 09:05:14.713359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.661 [2024-11-06 09:05:14.713371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.661 [2024-11-06 09:05:14.713401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.661 qpair failed and we were unable to recover it.
00:29:01.661 [2024-11-06 09:05:14.723296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.661 [2024-11-06 09:05:14.723377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.661 [2024-11-06 09:05:14.723401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.661 [2024-11-06 09:05:14.723415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.661 [2024-11-06 09:05:14.723427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.661 [2024-11-06 09:05:14.723461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.661 qpair failed and we were unable to recover it.
00:29:01.661 [2024-11-06 09:05:14.733327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.661 [2024-11-06 09:05:14.733415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.661 [2024-11-06 09:05:14.733439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.661 [2024-11-06 09:05:14.733453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.733466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.733494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.743288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.743404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.743430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.743444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.743457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.743485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.753346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.753433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.753456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.753471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.753483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.753512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.763421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.763521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.763549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.763564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.763576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.763605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.773380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.773464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.773489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.773503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.773515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.773544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.783399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.783493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.783517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.783532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.783543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.783573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.793468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.793610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.793635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.793650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.793661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.793703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.803556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.803644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.803672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.803689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.803701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.803731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.813494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.813588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.813620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.813635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.813647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.813677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.823531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.823615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.823639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.823654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.823666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.823695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.833608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.833700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.662 [2024-11-06 09:05:14.833729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.662 [2024-11-06 09:05:14.833743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.662 [2024-11-06 09:05:14.833755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.662 [2024-11-06 09:05:14.833785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.662 qpair failed and we were unable to recover it.
00:29:01.662 [2024-11-06 09:05:14.843609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.662 [2024-11-06 09:05:14.843725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.663 [2024-11-06 09:05:14.843751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.663 [2024-11-06 09:05:14.843765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.663 [2024-11-06 09:05:14.843777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.663 [2024-11-06 09:05:14.843806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.663 qpair failed and we were unable to recover it.
00:29:01.663 [2024-11-06 09:05:14.853620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.663 [2024-11-06 09:05:14.853699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.663 [2024-11-06 09:05:14.853723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.663 [2024-11-06 09:05:14.853736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.663 [2024-11-06 09:05:14.853754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.663 [2024-11-06 09:05:14.853785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.663 qpair failed and we were unable to recover it.
00:29:01.663 [2024-11-06 09:05:14.863614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.663 [2024-11-06 09:05:14.863693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.663 [2024-11-06 09:05:14.863718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.663 [2024-11-06 09:05:14.863733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.663 [2024-11-06 09:05:14.863744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.663 [2024-11-06 09:05:14.863774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.663 qpair failed and we were unable to recover it.
00:29:01.663 [2024-11-06 09:05:14.873695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.663 [2024-11-06 09:05:14.873785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.663 [2024-11-06 09:05:14.873809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.663 [2024-11-06 09:05:14.873823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.663 [2024-11-06 09:05:14.873844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.663 [2024-11-06 09:05:14.873875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.663 qpair failed and we were unable to recover it.
00:29:01.663 [2024-11-06 09:05:14.883726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.663 [2024-11-06 09:05:14.883857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.663 [2024-11-06 09:05:14.883884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.663 [2024-11-06 09:05:14.883898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.663 [2024-11-06 09:05:14.883910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.663 [2024-11-06 09:05:14.883940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.663 qpair failed and we were unable to recover it.
00:29:01.663 [2024-11-06 09:05:14.893714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.893808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.893841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.893858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.893870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.893900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.663 [2024-11-06 09:05:14.903758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.903850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.903875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.903889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.903902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.903931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.663 [2024-11-06 09:05:14.913805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.913913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.913939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.913953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.913965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.913994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.663 [2024-11-06 09:05:14.923808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.923908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.923933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.923947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.923960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.923989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.663 [2024-11-06 09:05:14.933865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.933961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.933985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.933999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.934011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.934040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.663 [2024-11-06 09:05:14.943870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.663 [2024-11-06 09:05:14.944006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.663 [2024-11-06 09:05:14.944037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.663 [2024-11-06 09:05:14.944052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.663 [2024-11-06 09:05:14.944064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.663 [2024-11-06 09:05:14.944093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.663 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:14.953908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:14.954005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:14.954031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:14.954045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:14.954056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:14.954085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:14.963969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:14.964063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:14.964089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:14.964103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:14.964115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:14.964144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:14.973974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:14.974073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:14.974098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:14.974112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:14.974124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:14.974153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:14.983968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:14.984054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:14.984080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:14.984100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:14.984112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:14.984141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:14.994021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:14.994106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:14.994130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:14.994144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:14.994156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:14.994185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:15.004021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:15.004102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:15.004126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:15.004140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:15.004152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:15.004180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:15.014092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:15.014210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:15.014236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:15.014250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:15.014262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:15.014292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:15.024168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:15.024251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:15.024275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.923 [2024-11-06 09:05:15.024288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.923 [2024-11-06 09:05:15.024300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.923 [2024-11-06 09:05:15.024330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.923 qpair failed and we were unable to recover it. 
00:29:01.923 [2024-11-06 09:05:15.034182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.923 [2024-11-06 09:05:15.034269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.923 [2024-11-06 09:05:15.034294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.034308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.034320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.034349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.044205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.044293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.044318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.044331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.044344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.044373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.054165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.054248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.054272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.054286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.054298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.054327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.064183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.064262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.064287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.064301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.064314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.064343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.074343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.074471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.074497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.074511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.074524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.074553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.084281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.084362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.084387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.084401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.084413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.084442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.094290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.094372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.094397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.094411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.094423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.094452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.104321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.104446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.104472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.104487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.104499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.104527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.114455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.114543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.114568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.114590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.114604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.114645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.124381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.124460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.124484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.124498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.124510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.124539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.134408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.134499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.134528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.134545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.134558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.134588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.144505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.924 [2024-11-06 09:05:15.144589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.924 [2024-11-06 09:05:15.144614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.924 [2024-11-06 09:05:15.144629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.924 [2024-11-06 09:05:15.144641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.924 [2024-11-06 09:05:15.144670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.924 qpair failed and we were unable to recover it. 
00:29:01.924 [2024-11-06 09:05:15.154455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.925 [2024-11-06 09:05:15.154542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.925 [2024-11-06 09:05:15.154568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.925 [2024-11-06 09:05:15.154583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.925 [2024-11-06 09:05:15.154596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.925 [2024-11-06 09:05:15.154630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.925 qpair failed and we were unable to recover it. 
00:29:01.925 [2024-11-06 09:05:15.164503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.925 [2024-11-06 09:05:15.164608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.925 [2024-11-06 09:05:15.164634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.925 [2024-11-06 09:05:15.164648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.925 [2024-11-06 09:05:15.164661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.925 [2024-11-06 09:05:15.164690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.925 qpair failed and we were unable to recover it. 
00:29:01.925 [2024-11-06 09:05:15.174500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:01.925 [2024-11-06 09:05:15.174631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:01.925 [2024-11-06 09:05:15.174657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:01.925 [2024-11-06 09:05:15.174672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:01.925 [2024-11-06 09:05:15.174684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:01.925 [2024-11-06 09:05:15.174714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:01.925 qpair failed and we were unable to recover it. 
00:29:01.925 [2024-11-06 09:05:15.184632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.925 [2024-11-06 09:05:15.184753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.925 [2024-11-06 09:05:15.184779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.925 [2024-11-06 09:05:15.184793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.925 [2024-11-06 09:05:15.184804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.925 [2024-11-06 09:05:15.184852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.925 qpair failed and we were unable to recover it.
00:29:01.925 [2024-11-06 09:05:15.194594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.925 [2024-11-06 09:05:15.194680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.925 [2024-11-06 09:05:15.194705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.925 [2024-11-06 09:05:15.194719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.925 [2024-11-06 09:05:15.194731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.925 [2024-11-06 09:05:15.194772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.925 qpair failed and we were unable to recover it.
00:29:01.925 [2024-11-06 09:05:15.204685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:01.925 [2024-11-06 09:05:15.204772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:01.925 [2024-11-06 09:05:15.204797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:01.925 [2024-11-06 09:05:15.204811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:01.925 [2024-11-06 09:05:15.204824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:01.925 [2024-11-06 09:05:15.204863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:01.925 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.214643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.214728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.184 [2024-11-06 09:05:15.214753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.184 [2024-11-06 09:05:15.214767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.184 [2024-11-06 09:05:15.214779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.184 [2024-11-06 09:05:15.214808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.184 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.224646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.224735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.184 [2024-11-06 09:05:15.224760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.184 [2024-11-06 09:05:15.224774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.184 [2024-11-06 09:05:15.224786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.184 [2024-11-06 09:05:15.224815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.184 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.234670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.234761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.184 [2024-11-06 09:05:15.234786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.184 [2024-11-06 09:05:15.234800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.184 [2024-11-06 09:05:15.234812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.184 [2024-11-06 09:05:15.234849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.184 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.244702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.244786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.184 [2024-11-06 09:05:15.244816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.184 [2024-11-06 09:05:15.244840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.184 [2024-11-06 09:05:15.244856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.184 [2024-11-06 09:05:15.244886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.184 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.254761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.254854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.184 [2024-11-06 09:05:15.254879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.184 [2024-11-06 09:05:15.254893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.184 [2024-11-06 09:05:15.254905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.184 [2024-11-06 09:05:15.254934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.184 qpair failed and we were unable to recover it.
00:29:02.184 [2024-11-06 09:05:15.264759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.184 [2024-11-06 09:05:15.264848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.264874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.264888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.264900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.264929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.274797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.274892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.274918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.274932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.274944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.274973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.284915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.285002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.285031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.285046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.285063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.285094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.294893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.294980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.295004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.295018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.295030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.295059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.304902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.304987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.305013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.305027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.305039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.305068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.314940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.315047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.315073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.315087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.315099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.315128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.325025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.325114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.325140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.325155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.325167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.325196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.334954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.335078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.335103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.335118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.335130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.335160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.344982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.345112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.345138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.345152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.345164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.345192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.355035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.355126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.355150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.355164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.355176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.355205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.365091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.185 [2024-11-06 09:05:15.365175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.185 [2024-11-06 09:05:15.365199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.185 [2024-11-06 09:05:15.365213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.185 [2024-11-06 09:05:15.365225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.185 [2024-11-06 09:05:15.365266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.185 qpair failed and we were unable to recover it.
00:29:02.185 [2024-11-06 09:05:15.375200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.375282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.375323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.375339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.375351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.375380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.385133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.385262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.385292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.385308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.385320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.385349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.395162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.395251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.395275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.395290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.395302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.395332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.405181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.405267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.405291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.405305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.405318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.405347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.415221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.415327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.415352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.415366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.415384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.415413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.425241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.425370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.425395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.425411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.425423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.425453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.435268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.435378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.435405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.435420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.435432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.435462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.445308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.445397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.445422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.445436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.445447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.445477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.455315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.455396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.455420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.455434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.455446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.455475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.186 [2024-11-06 09:05:15.465340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.186 [2024-11-06 09:05:15.465423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.186 [2024-11-06 09:05:15.465448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.186 [2024-11-06 09:05:15.465462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.186 [2024-11-06 09:05:15.465474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.186 [2024-11-06 09:05:15.465515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.186 qpair failed and we were unable to recover it.
00:29:02.447 [2024-11-06 09:05:15.475383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.447 [2024-11-06 09:05:15.475476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.447 [2024-11-06 09:05:15.475500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.447 [2024-11-06 09:05:15.475517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.447 [2024-11-06 09:05:15.475530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.447 [2024-11-06 09:05:15.475572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.447 qpair failed and we were unable to recover it.
00:29:02.447 [2024-11-06 09:05:15.485408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.447 [2024-11-06 09:05:15.485524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.447 [2024-11-06 09:05:15.485550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.447 [2024-11-06 09:05:15.485564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.447 [2024-11-06 09:05:15.485577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.447 [2024-11-06 09:05:15.485617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.447 qpair failed and we were unable to recover it.
00:29:02.447 [2024-11-06 09:05:15.495441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.495564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.495589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.495603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.495615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.495644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.505505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.505592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.505624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.505642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.505656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.505688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.515493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.515580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.515605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.515619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.515631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.515661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.525532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.525618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.525642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.525656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.525668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.525697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.535508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.535625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.535651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.535665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.535677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.535707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.545533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.545615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.545639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.545658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.545671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.545700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.555601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.555687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.555711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.555725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.555738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.447 [2024-11-06 09:05:15.555766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.447 qpair failed and we were unable to recover it. 
00:29:02.447 [2024-11-06 09:05:15.565626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.447 [2024-11-06 09:05:15.565717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.447 [2024-11-06 09:05:15.565753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.447 [2024-11-06 09:05:15.565767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.447 [2024-11-06 09:05:15.565780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.565809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.575636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.575717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.575743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.575757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.575769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.575799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.585748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.585857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.585890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.585906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.585918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.585947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.595695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.595784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.595810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.595825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.595846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.595876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.605771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.605879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.605906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.605921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.605933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.605962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.615869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.615992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.616019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.616034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.616046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.616092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.625852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.625936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.625962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.625976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.625989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.626032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.635853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.635958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.635984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.635998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.636011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.636041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.645862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.645971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.646000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.646016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.646029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.646059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.655918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.656003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.656028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.656042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.656055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.656085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.665917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.666009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.666034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.666048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.666061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.666090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.675968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.676063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.676088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.448 [2024-11-06 09:05:15.676107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.448 [2024-11-06 09:05:15.676121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.448 [2024-11-06 09:05:15.676150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.448 qpair failed and we were unable to recover it. 
00:29:02.448 [2024-11-06 09:05:15.686059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.448 [2024-11-06 09:05:15.686158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.448 [2024-11-06 09:05:15.686184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.449 [2024-11-06 09:05:15.686198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.449 [2024-11-06 09:05:15.686211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.449 [2024-11-06 09:05:15.686240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.449 qpair failed and we were unable to recover it. 
00:29:02.449 [2024-11-06 09:05:15.695986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.449 [2024-11-06 09:05:15.696069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.449 [2024-11-06 09:05:15.696093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.449 [2024-11-06 09:05:15.696108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.449 [2024-11-06 09:05:15.696120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.449 [2024-11-06 09:05:15.696149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.449 qpair failed and we were unable to recover it. 
00:29:02.449 [2024-11-06 09:05:15.706034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.449 [2024-11-06 09:05:15.706114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.449 [2024-11-06 09:05:15.706139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.449 [2024-11-06 09:05:15.706153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.449 [2024-11-06 09:05:15.706166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.449 [2024-11-06 09:05:15.706194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.449 qpair failed and we were unable to recover it. 
00:29:02.449 [2024-11-06 09:05:15.716060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.449 [2024-11-06 09:05:15.716188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.449 [2024-11-06 09:05:15.716215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.449 [2024-11-06 09:05:15.716230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.449 [2024-11-06 09:05:15.716242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.449 [2024-11-06 09:05:15.716278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.449 qpair failed and we were unable to recover it. 
00:29:02.449 [2024-11-06 09:05:15.726095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.449 [2024-11-06 09:05:15.726191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.449 [2024-11-06 09:05:15.726220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.449 [2024-11-06 09:05:15.726236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.449 [2024-11-06 09:05:15.726248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.449 [2024-11-06 09:05:15.726277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.449 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.736109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.710 [2024-11-06 09:05:15.736193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.710 [2024-11-06 09:05:15.736218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.710 [2024-11-06 09:05:15.736232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.710 [2024-11-06 09:05:15.736244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.710 [2024-11-06 09:05:15.736273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.710 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.746242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.710 [2024-11-06 09:05:15.746322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.710 [2024-11-06 09:05:15.746348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.710 [2024-11-06 09:05:15.746362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.710 [2024-11-06 09:05:15.746374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.710 [2024-11-06 09:05:15.746415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.710 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.756211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.710 [2024-11-06 09:05:15.756298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.710 [2024-11-06 09:05:15.756323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.710 [2024-11-06 09:05:15.756337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.710 [2024-11-06 09:05:15.756350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.710 [2024-11-06 09:05:15.756391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.710 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.766250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.710 [2024-11-06 09:05:15.766379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.710 [2024-11-06 09:05:15.766409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.710 [2024-11-06 09:05:15.766425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.710 [2024-11-06 09:05:15.766437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.710 [2024-11-06 09:05:15.766468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.710 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.776273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:02.710 [2024-11-06 09:05:15.776382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:02.710 [2024-11-06 09:05:15.776409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:02.710 [2024-11-06 09:05:15.776424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:02.710 [2024-11-06 09:05:15.776436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:02.710 [2024-11-06 09:05:15.776465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:02.710 qpair failed and we were unable to recover it. 
00:29:02.710 [2024-11-06 09:05:15.786284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.710 [2024-11-06 09:05:15.786368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.710 [2024-11-06 09:05:15.786393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.710 [2024-11-06 09:05:15.786406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.710 [2024-11-06 09:05:15.786418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.710 [2024-11-06 09:05:15.786448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.710 qpair failed and we were unable to recover it.
00:29:02.710 [2024-11-06 09:05:15.796318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.710 [2024-11-06 09:05:15.796405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.710 [2024-11-06 09:05:15.796430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.710 [2024-11-06 09:05:15.796443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.710 [2024-11-06 09:05:15.796455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.710 [2024-11-06 09:05:15.796484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.710 qpair failed and we were unable to recover it.
00:29:02.710 [2024-11-06 09:05:15.806378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.710 [2024-11-06 09:05:15.806486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.710 [2024-11-06 09:05:15.806519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.710 [2024-11-06 09:05:15.806535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.710 [2024-11-06 09:05:15.806547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.710 [2024-11-06 09:05:15.806576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.710 qpair failed and we were unable to recover it.
00:29:02.710 [2024-11-06 09:05:15.816326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.710 [2024-11-06 09:05:15.816412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.710 [2024-11-06 09:05:15.816437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.710 [2024-11-06 09:05:15.816452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.710 [2024-11-06 09:05:15.816463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.710 [2024-11-06 09:05:15.816492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.710 qpair failed and we were unable to recover it.
00:29:02.710 [2024-11-06 09:05:15.826478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.710 [2024-11-06 09:05:15.826561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.710 [2024-11-06 09:05:15.826586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.710 [2024-11-06 09:05:15.826600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.710 [2024-11-06 09:05:15.826612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.710 [2024-11-06 09:05:15.826641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.710 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.836404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.836489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.836514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.836528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.836540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.836569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.846419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.846502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.846526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.846540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.846557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.846587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.856439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.856571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.856597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.856611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.856623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.856652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.866488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.866619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.866644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.866658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.866670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.866699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.876526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.876608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.876632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.876646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.876657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.876686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.886625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.886746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.886772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.886787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.886799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.886846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.896616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.896710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.896736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.896750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.896763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.896804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.906616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.906702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.906726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.906741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.906753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.906781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.916645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.916733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.916757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.916771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.916783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.916811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.926795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.926888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.926914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.926929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.926941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.926971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.936715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.936816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.936856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.711 [2024-11-06 09:05:15.936873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.711 [2024-11-06 09:05:15.936885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.711 [2024-11-06 09:05:15.936914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.711 qpair failed and we were unable to recover it.
00:29:02.711 [2024-11-06 09:05:15.946707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.711 [2024-11-06 09:05:15.946790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.711 [2024-11-06 09:05:15.946814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.946829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.946851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.946881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:02.712 [2024-11-06 09:05:15.956843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.712 [2024-11-06 09:05:15.956934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.712 [2024-11-06 09:05:15.956958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.956973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.956985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.957014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:02.712 [2024-11-06 09:05:15.966755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.712 [2024-11-06 09:05:15.966885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.712 [2024-11-06 09:05:15.966911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.966925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.966937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.966967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:02.712 [2024-11-06 09:05:15.976806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.712 [2024-11-06 09:05:15.976902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.712 [2024-11-06 09:05:15.976927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.976942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.976959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.976990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:02.712 [2024-11-06 09:05:15.986852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.712 [2024-11-06 09:05:15.986956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.712 [2024-11-06 09:05:15.986982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.986996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.987008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.987037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:02.712 [2024-11-06 09:05:15.996864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:02.712 [2024-11-06 09:05:15.996953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:02.712 [2024-11-06 09:05:15.996978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:02.712 [2024-11-06 09:05:15.996993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:02.712 [2024-11-06 09:05:15.997005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:02.712 [2024-11-06 09:05:15.997034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:02.712 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.006909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.006995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.007020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.007035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.007046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.007076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.016969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.017062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.017087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.017101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.017114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.017142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.026962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.027043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.027071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.027086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.027098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.027128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.037052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.037144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.037168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.037182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.037194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.037223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.047010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.047118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.047144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.047158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.047170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.047199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.057030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.057114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.057141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.057156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.057168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.057197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.067056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.067139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.067171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.067186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.067198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.067227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.077125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.077221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.077247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.077261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.077273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.077302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.087239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.087368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.087394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.087408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.087420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.032 [2024-11-06 09:05:16.087449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.032 qpair failed and we were unable to recover it.
00:29:03.032 [2024-11-06 09:05:16.097149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.032 [2024-11-06 09:05:16.097236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.032 [2024-11-06 09:05:16.097261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.032 [2024-11-06 09:05:16.097275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.032 [2024-11-06 09:05:16.097286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.033 [2024-11-06 09:05:16.097315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.033 qpair failed and we were unable to recover it.
00:29:03.033 [2024-11-06 09:05:16.107310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.033 [2024-11-06 09:05:16.107391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.033 [2024-11-06 09:05:16.107416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.033 [2024-11-06 09:05:16.107436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.033 [2024-11-06 09:05:16.107448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.033 [2024-11-06 09:05:16.107477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.033 qpair failed and we were unable to recover it.
00:29:03.033 [2024-11-06 09:05:16.117203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.033 [2024-11-06 09:05:16.117289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.033 [2024-11-06 09:05:16.117313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.033 [2024-11-06 09:05:16.117327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.033 [2024-11-06 09:05:16.117339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.033 [2024-11-06 09:05:16.117367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.033 qpair failed and we were unable to recover it.
00:29:03.033 [2024-11-06 09:05:16.127226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.033 [2024-11-06 09:05:16.127324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.033 [2024-11-06 09:05:16.127349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.033 [2024-11-06 09:05:16.127364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.033 [2024-11-06 09:05:16.127376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.033 [2024-11-06 09:05:16.127404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.033 qpair failed and we were unable to recover it.
00:29:03.033 [2024-11-06 09:05:16.137259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.137361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.137386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.137400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.137412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.137442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.147273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.147351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.147375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.147389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.147401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.147436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.157365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.157456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.157482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.157496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.157508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.157536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.167338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.167422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.167447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.167461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.167473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.167502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.177432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.177519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.177542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.177556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.177569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.177598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.187402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.187492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.187516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.187530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.187542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.187570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.197553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.197657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.197681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.197695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.197707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.197748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.207454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.207596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.033 [2024-11-06 09:05:16.207622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.033 [2024-11-06 09:05:16.207636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.033 [2024-11-06 09:05:16.207648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.033 [2024-11-06 09:05:16.207677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.033 qpair failed and we were unable to recover it. 
00:29:03.033 [2024-11-06 09:05:16.217495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.033 [2024-11-06 09:05:16.217581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.217607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.217621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.217633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.217662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.227526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.227608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.227633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.227647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.227659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.227688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.237554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.237667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.237693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.237712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.237726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.237755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.247593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.247677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.247701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.247715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.247728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.247757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.257625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.257709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.257733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.257748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.257760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.257789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.267621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.267701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.267727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.267741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.267753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.267781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.277667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.277755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.277780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.277794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.277806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.277855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.287741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.287855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.287881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.287896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.287908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.287938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.297742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.297828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.297860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.297875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.297887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.297916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.307763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.307886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.307916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.307932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.307945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.307975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.034 [2024-11-06 09:05:16.317871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.034 [2024-11-06 09:05:16.317960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.034 [2024-11-06 09:05:16.317984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.034 [2024-11-06 09:05:16.317998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.034 [2024-11-06 09:05:16.318010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.034 [2024-11-06 09:05:16.318040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.034 qpair failed and we were unable to recover it. 
00:29:03.293 [2024-11-06 09:05:16.327800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.293 [2024-11-06 09:05:16.327892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.293 [2024-11-06 09:05:16.327917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.293 [2024-11-06 09:05:16.327932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.293 [2024-11-06 09:05:16.327944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.293 [2024-11-06 09:05:16.327973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.293 qpair failed and we were unable to recover it. 
00:29:03.293 [2024-11-06 09:05:16.337861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.293 [2024-11-06 09:05:16.337946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.293 [2024-11-06 09:05:16.337970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.293 [2024-11-06 09:05:16.337985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.293 [2024-11-06 09:05:16.337997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.293 [2024-11-06 09:05:16.338040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.293 qpair failed and we were unable to recover it. 
00:29:03.293 [2024-11-06 09:05:16.347885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.293 [2024-11-06 09:05:16.347967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.293 [2024-11-06 09:05:16.347991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.293 [2024-11-06 09:05:16.348005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.293 [2024-11-06 09:05:16.348017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.293 [2024-11-06 09:05:16.348047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.293 qpair failed and we were unable to recover it. 
00:29:03.293 [2024-11-06 09:05:16.357933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.293 [2024-11-06 09:05:16.358040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.294 [2024-11-06 09:05:16.358069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.294 [2024-11-06 09:05:16.358085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.294 [2024-11-06 09:05:16.358097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.294 [2024-11-06 09:05:16.358138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.294 qpair failed and we were unable to recover it. 
00:29:03.294 [2024-11-06 09:05:16.368006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.294 [2024-11-06 09:05:16.368095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.294 [2024-11-06 09:05:16.368135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.294 [2024-11-06 09:05:16.368151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.294 [2024-11-06 09:05:16.368163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.294 [2024-11-06 09:05:16.368192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.294 qpair failed and we were unable to recover it. 
00:29:03.294 [2024-11-06 09:05:16.377955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.294 [2024-11-06 09:05:16.378084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.294 [2024-11-06 09:05:16.378109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.294 [2024-11-06 09:05:16.378123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.294 [2024-11-06 09:05:16.378135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.294 [2024-11-06 09:05:16.378164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.294 qpair failed and we were unable to recover it. 
00:29:03.294 [2024-11-06 09:05:16.387992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.294 [2024-11-06 09:05:16.388078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.294 [2024-11-06 09:05:16.388110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.294 [2024-11-06 09:05:16.388126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.294 [2024-11-06 09:05:16.388138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.294 [2024-11-06 09:05:16.388168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.294 qpair failed and we were unable to recover it. 
00:29:03.294 [2024-11-06 09:05:16.398156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.294 [2024-11-06 09:05:16.398284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.294 [2024-11-06 09:05:16.398310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.294 [2024-11-06 09:05:16.398324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.294 [2024-11-06 09:05:16.398337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.294 [2024-11-06 09:05:16.398366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.294 qpair failed and we were unable to recover it. 
00:29:03.294 [2024-11-06 09:05:16.408158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.408288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.408313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.408328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.408345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.408375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.418053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.418136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.418160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.418174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.418186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.418215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.428077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.428159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.428184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.428197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.428210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.428239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.438139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.438255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.438281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.438295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.438307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.438336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.448154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.448237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.448262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.448276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.448287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.448316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.458217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.458333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.458359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.458373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.458385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.458414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.294 [2024-11-06 09:05:16.468273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.294 [2024-11-06 09:05:16.468361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.294 [2024-11-06 09:05:16.468391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.294 [2024-11-06 09:05:16.468405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.294 [2024-11-06 09:05:16.468417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.294 [2024-11-06 09:05:16.468446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.294 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.478377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.478504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.478528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.478542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.478554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.478582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.488360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.488452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.488477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.488491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.488504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.488532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.498351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.498447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.498481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.498496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.498509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.498538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.508334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.508413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.508437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.508451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.508462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.508504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.518435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.518569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.518594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.518608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.518620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.518650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.528378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.528466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.528490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.528504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.528516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.528544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.538448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.538571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.538597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.538611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.538629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.538660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.548440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.548522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.548545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.548560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.548572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.548613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.558464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.558552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.558577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.558591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.558603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.558632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.568549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.568638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.568662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.568676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.568688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.568717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.295 [2024-11-06 09:05:16.578505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.295 [2024-11-06 09:05:16.578602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.295 [2024-11-06 09:05:16.578627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.295 [2024-11-06 09:05:16.578641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.295 [2024-11-06 09:05:16.578653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.295 [2024-11-06 09:05:16.578682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.295 qpair failed and we were unable to recover it.
00:29:03.554 [2024-11-06 09:05:16.588584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.554 [2024-11-06 09:05:16.588700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.554 [2024-11-06 09:05:16.588725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.554 [2024-11-06 09:05:16.588739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.554 [2024-11-06 09:05:16.588751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.554 [2024-11-06 09:05:16.588780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.554 qpair failed and we were unable to recover it.
00:29:03.554 [2024-11-06 09:05:16.598561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.554 [2024-11-06 09:05:16.598648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.554 [2024-11-06 09:05:16.598673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.554 [2024-11-06 09:05:16.598686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.554 [2024-11-06 09:05:16.598698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.554 [2024-11-06 09:05:16.598727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.554 qpair failed and we were unable to recover it.
00:29:03.554 [2024-11-06 09:05:16.608584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.554 [2024-11-06 09:05:16.608669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.554 [2024-11-06 09:05:16.608693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.554 [2024-11-06 09:05:16.608706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.554 [2024-11-06 09:05:16.608718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.554 [2024-11-06 09:05:16.608747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.554 qpair failed and we were unable to recover it.
00:29:03.554 [2024-11-06 09:05:16.618623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.554 [2024-11-06 09:05:16.618720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.554 [2024-11-06 09:05:16.618746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.554 [2024-11-06 09:05:16.618760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.554 [2024-11-06 09:05:16.618772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.554 [2024-11-06 09:05:16.618801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.554 qpair failed and we were unable to recover it.
00:29:03.554 [2024-11-06 09:05:16.628705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.554 [2024-11-06 09:05:16.628813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.554 [2024-11-06 09:05:16.628851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.554 [2024-11-06 09:05:16.628867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.554 [2024-11-06 09:05:16.628879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.554 [2024-11-06 09:05:16.628908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.554 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.638688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.638801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.638826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.638849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.638862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.638891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.648710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.648797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.648822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.648842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.648856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.648885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.658766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.658895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.658921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.658935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.658947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.658976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.668756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.668839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.668872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.668894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.668907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.668937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.678794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.678890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.678915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.678928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.678941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.678970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.688846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.688926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.688951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.688964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.688977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.689018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.698879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.698966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.698990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.699004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.699015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.699044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.708898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.708986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.709011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.709025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.709037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.709072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.718906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.718996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.719025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.719039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.719051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.719080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.728945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.729023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.729047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.729061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.729073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.729102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.738973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.739072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.739097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.739112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.739124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.739153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.749027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:03.555 [2024-11-06 09:05:16.749113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:03.555 [2024-11-06 09:05:16.749137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:03.555 [2024-11-06 09:05:16.749151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:03.555 [2024-11-06 09:05:16.749163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90
00:29:03.555 [2024-11-06 09:05:16.749192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:03.555 qpair failed and we were unable to recover it.
00:29:03.555 [2024-11-06 09:05:16.759063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.555 [2024-11-06 09:05:16.759161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.555 [2024-11-06 09:05:16.759190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.555 [2024-11-06 09:05:16.759204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.555 [2024-11-06 09:05:16.759216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.759244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.769107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.769226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.769251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.769265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.769278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.769307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.779105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.779193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.779216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.779230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.779242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.779271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.789149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.789229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.789255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.789269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.789281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.789309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.799189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.799319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.799347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.799370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.799383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.799413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.809166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.809243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.809268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.809282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.809294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.809323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.819244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.819326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.819351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.819365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.819377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.819406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.829233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.829316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.829341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.829355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.829367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.829396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.556 [2024-11-06 09:05:16.839266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.556 [2024-11-06 09:05:16.839356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.556 [2024-11-06 09:05:16.839381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.556 [2024-11-06 09:05:16.839395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.556 [2024-11-06 09:05:16.839407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.556 [2024-11-06 09:05:16.839441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.556 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.849288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.849393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.849420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.849434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.849446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.849474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.859346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.859461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.859486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.859500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.859512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.859541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.869333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.869414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.869439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.869452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.869464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.869493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.879406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.879511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.879540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.879555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.879568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.879597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.889438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.889540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.889570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.889585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.889597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.889627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.899453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.899570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.899595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.899610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.899622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.899651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.909534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.909617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.909642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.909656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.909669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.909698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.919485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.919577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.919605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.814 [2024-11-06 09:05:16.919619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.814 [2024-11-06 09:05:16.919631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.814 [2024-11-06 09:05:16.919660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.814 qpair failed and we were unable to recover it. 
00:29:03.814 [2024-11-06 09:05:16.929610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.814 [2024-11-06 09:05:16.929700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.814 [2024-11-06 09:05:16.929729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.929744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.929756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.929786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.939574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.939676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.939701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.939716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.939728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.939757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.949566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.949643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.949667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.949681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.949693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.949721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.959608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.959695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.959719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.959733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.959745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.959774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.969643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.969765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.969791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.969805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.969823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.969862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.979659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.979743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.979768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.979781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.979793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.979822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.989697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.989778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.989801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.989815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.989827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.989866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:16.999731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:16.999816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:16.999848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:16.999863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:16.999875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:16.999904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:17.009885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:17.010011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:17.010036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:17.010050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:17.010062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:17.010091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:17.019780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:17.019873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:17.019898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:17.019913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:17.019928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:17.019958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
00:29:03.815 [2024-11-06 09:05:17.029844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:03.815 [2024-11-06 09:05:17.029930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:03.815 [2024-11-06 09:05:17.029954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:03.815 [2024-11-06 09:05:17.029967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:03.815 [2024-11-06 09:05:17.029979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:03.815 [2024-11-06 09:05:17.030009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:03.815 qpair failed and we were unable to recover it. 
[... the same six-record error sequence (ctrlr.c:762 "Unknown controller ID 0x1" → nvme_fabric.c:599 "Connect command failed, rc -5" → nvme_fabric.c:610 "sct 1, sc 130" → nvme_tcp.c:2348 "Failed to poll NVMe-oF Fabric CONNECT command" → nvme_tcp.c:2125 "Failed to connect tqpair=0x7f6ad8000b90" → nvme_qpair.c:804 "CQ transport error -6 (No such device or address) on qpair id 1", each ending "qpair failed and we were unable to recover it.") repeats at ~10 ms intervals, 34 more times, from 2024-11-06 09:05:17.039 through 09:05:17.371 (elapsed 00:29:03.815–00:29:04.335) ...]
00:29:04.335 [2024-11-06 09:05:17.380800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.380942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.380968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.380982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.380995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.381023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.390800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.390892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.390918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.390932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.390944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.390973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.400866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.400954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.400978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.400991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.401004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.401038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.410881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.410969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.410994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.411009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.411021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.411050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.420931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.421030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.421056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.421071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.421083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.421125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.430945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.431067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.431093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.431107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.431119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.335 [2024-11-06 09:05:17.431150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.335 qpair failed and we were unable to recover it. 
00:29:04.335 [2024-11-06 09:05:17.440966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.335 [2024-11-06 09:05:17.441058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.335 [2024-11-06 09:05:17.441084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.335 [2024-11-06 09:05:17.441099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.335 [2024-11-06 09:05:17.441111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.441141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.451011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.451098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.451123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.451137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.451149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.451178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.461126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.461213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.461238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.461252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.461264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.461293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.471045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.471130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.471153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.471167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.471179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.471208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.481126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.481268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.481303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.481317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.481329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.481366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.491106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.491233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.491264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.491279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.491292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.491321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.501193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.501304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.501330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.501344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.501356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.501386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.511175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.511290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.511318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.511334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.511348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.511377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.521228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.521321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.521345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.521359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.521371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.521400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.531266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.531361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.531387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.531401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.531419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.531450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.541272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.541353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.541377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.541391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.541403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.541432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.551257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.551333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.551357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.551371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.551383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.551412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.561364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.561493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.561521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.561538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.561551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.561594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.336 [2024-11-06 09:05:17.571356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.336 [2024-11-06 09:05:17.571442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.336 [2024-11-06 09:05:17.571467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.336 [2024-11-06 09:05:17.571481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.336 [2024-11-06 09:05:17.571493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.336 [2024-11-06 09:05:17.571522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.336 qpair failed and we were unable to recover it. 
00:29:04.337 [2024-11-06 09:05:17.581420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.337 [2024-11-06 09:05:17.581506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.337 [2024-11-06 09:05:17.581530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.337 [2024-11-06 09:05:17.581544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.337 [2024-11-06 09:05:17.581556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.337 [2024-11-06 09:05:17.581585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.337 qpair failed and we were unable to recover it. 
00:29:04.337 [2024-11-06 09:05:17.591421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.337 [2024-11-06 09:05:17.591549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.337 [2024-11-06 09:05:17.591578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.337 [2024-11-06 09:05:17.591593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.337 [2024-11-06 09:05:17.591606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.337 [2024-11-06 09:05:17.591634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.337 qpair failed and we were unable to recover it. 
00:29:04.337 [2024-11-06 09:05:17.601456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.337 [2024-11-06 09:05:17.601549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.337 [2024-11-06 09:05:17.601577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.337 [2024-11-06 09:05:17.601592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.337 [2024-11-06 09:05:17.601605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.337 [2024-11-06 09:05:17.601634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.337 qpair failed and we were unable to recover it. 
00:29:04.337 [2024-11-06 09:05:17.611520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.337 [2024-11-06 09:05:17.611603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.337 [2024-11-06 09:05:17.611629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.337 [2024-11-06 09:05:17.611644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.337 [2024-11-06 09:05:17.611656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.337 [2024-11-06 09:05:17.611685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.337 qpair failed and we were unable to recover it. 
00:29:04.337 [2024-11-06 09:05:17.621501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.337 [2024-11-06 09:05:17.621589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.337 [2024-11-06 09:05:17.621619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.337 [2024-11-06 09:05:17.621635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.337 [2024-11-06 09:05:17.621647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.337 [2024-11-06 09:05:17.621675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.337 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.631507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.631631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.631656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.631670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.631683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.631711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.641590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.641679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.641705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.641723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.641735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.641765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.651573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.651657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.651682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.651696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.651708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.651737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.661589] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.661674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.661700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.661715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.661732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.661762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.671608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.671717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.671743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.671758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.671770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.671799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.681648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.681772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.681799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.681813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.681825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.681862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.691676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.691793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.691819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.691840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.691855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.596 [2024-11-06 09:05:17.691884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.596 qpair failed and we were unable to recover it. 
00:29:04.596 [2024-11-06 09:05:17.701801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.596 [2024-11-06 09:05:17.701898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.596 [2024-11-06 09:05:17.701924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.596 [2024-11-06 09:05:17.701938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.596 [2024-11-06 09:05:17.701950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.701979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.711761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.711862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.711886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.711900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.711912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.711941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.721808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.721902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.721927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.721941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.721953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.721994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.731860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.731947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.731971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.731985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.731997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.732038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.741839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.741957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.741983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.741997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.742009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.742039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.751874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.751960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.751993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.752009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.752021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.752050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.761938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.762040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.762065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.762079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.762091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.762120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.771916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.771999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.772023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.772037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.772049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.772077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.781948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.782035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.782061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.782075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.782087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad8000b90 00:29:04.597 [2024-11-06 09:05:17.782116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.792018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.792101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.792136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.792158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.792172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad0000b90 00:29:04.597 [2024-11-06 09:05:17.792203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.802033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.802128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.802160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.802176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.802188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6acc000b90 00:29:04.597 [2024-11-06 09:05:17.802220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.812048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.812135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.812164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.812179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.812192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6acc000b90 00:29:04.597 [2024-11-06 09:05:17.812222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.822090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.822179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.822207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.597 [2024-11-06 09:05:17.822222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.597 [2024-11-06 09:05:17.822234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6acc000b90 00:29:04.597 [2024-11-06 09:05:17.822267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.597 qpair failed and we were unable to recover it. 
00:29:04.597 [2024-11-06 09:05:17.832109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.597 [2024-11-06 09:05:17.832200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.597 [2024-11-06 09:05:17.832233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.598 [2024-11-06 09:05:17.832249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.598 [2024-11-06 09:05:17.832262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1853fa0 00:29:04.598 [2024-11-06 09:05:17.832298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.598 qpair failed and we were unable to recover it. 
00:29:04.598 [2024-11-06 09:05:17.842141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.598 [2024-11-06 09:05:17.842237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.598 [2024-11-06 09:05:17.842267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.598 [2024-11-06 09:05:17.842284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.598 [2024-11-06 09:05:17.842296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1853fa0 00:29:04.598 [2024-11-06 09:05:17.842326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.598 [2024-11-06 09:05:17.842435] nvme_ctrlr.c:4482:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:04.598 A controller has encountered a failure and is being reset. 
00:29:04.598 [2024-11-06 09:05:17.852163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:04.598 [2024-11-06 09:05:17.852247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:04.598 [2024-11-06 09:05:17.852279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:04.598 [2024-11-06 09:05:17.852294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:04.598 [2024-11-06 09:05:17.852307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6ad0000b90 00:29:04.598 [2024-11-06 09:05:17.852338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.598 qpair failed and we were unable to recover it. 00:29:04.856 Controller properly reset. 00:29:04.856 Initializing NVMe Controllers 00:29:04.856 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:04.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:04.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:04.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:04.856 Initialization complete. Launching workers. 
00:29:04.856 Starting thread on core 1 00:29:04.856 Starting thread on core 2 00:29:04.856 Starting thread on core 3 00:29:04.856 Starting thread on core 0 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:04.856 00:29:04.856 real 0m10.945s 00:29:04.856 user 0m18.744s 00:29:04.856 sys 0m5.336s 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.856 ************************************ 00:29:04.856 END TEST nvmf_target_disconnect_tc2 00:29:04.856 ************************************ 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.856 rmmod nvme_tcp 00:29:04.856 rmmod nvme_fabrics 00:29:04.856 rmmod nvme_keyring 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 939517 ']' 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 939517 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 939517 ']' 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 939517 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.856 09:05:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 939517 00:29:04.856 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:04.856 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:04.856 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 939517' 00:29:04.856 killing process with pid 939517 00:29:04.856 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 939517 00:29:04.856 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 939517 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.114 09:05:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.017 09:05:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.017 00:29:07.017 real 0m15.978s 00:29:07.017 user 0m45.688s 00:29:07.017 sys 0m7.464s 00:29:07.017 09:05:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.017 09:05:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:07.017 ************************************ 00:29:07.017 END TEST nvmf_target_disconnect 00:29:07.017 ************************************ 00:29:07.276 09:05:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:07.276 00:29:07.276 real 5m6.512s 00:29:07.276 user 10m46.768s 00:29:07.276 sys 1m15.792s 00:29:07.276 09:05:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.276 09:05:20 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.276 ************************************ 00:29:07.276 END TEST nvmf_host 00:29:07.276 ************************************ 00:29:07.276 09:05:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:29:07.276 09:05:20 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:29:07.276 09:05:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:07.276 09:05:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:07.276 09:05:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.276 09:05:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:07.276 ************************************ 00:29:07.276 START TEST nvmf_target_core_interrupt_mode 00:29:07.276 ************************************ 00:29:07.276 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:07.276 * Looking for test storage... 
00:29:07.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lcov --version 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:07.277 09:05:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.277 --rc 
genhtml_branch_coverage=1 00:29:07.277 --rc genhtml_function_coverage=1 00:29:07.277 --rc genhtml_legend=1 00:29:07.277 --rc geninfo_all_blocks=1 00:29:07.277 --rc geninfo_unexecuted_blocks=1 00:29:07.277 00:29:07.277 ' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.277 --rc genhtml_branch_coverage=1 00:29:07.277 --rc genhtml_function_coverage=1 00:29:07.277 --rc genhtml_legend=1 00:29:07.277 --rc geninfo_all_blocks=1 00:29:07.277 --rc geninfo_unexecuted_blocks=1 00:29:07.277 00:29:07.277 ' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.277 --rc genhtml_branch_coverage=1 00:29:07.277 --rc genhtml_function_coverage=1 00:29:07.277 --rc genhtml_legend=1 00:29:07.277 --rc geninfo_all_blocks=1 00:29:07.277 --rc geninfo_unexecuted_blocks=1 00:29:07.277 00:29:07.277 ' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.277 --rc genhtml_branch_coverage=1 00:29:07.277 --rc genhtml_function_coverage=1 00:29:07.277 --rc genhtml_legend=1 00:29:07.277 --rc geninfo_all_blocks=1 00:29:07.277 --rc geninfo_unexecuted_blocks=1 00:29:07.277 00:29:07.277 ' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.277 
09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.277 09:05:20 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.277 
09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:07.277 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:07.278 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.278 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:07.537 ************************************ 00:29:07.537 START TEST nvmf_abort 00:29:07.537 ************************************ 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:07.537 * Looking for test storage... 
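For readers following the trace, the `build_nvmf_app_args` steps above (nvmf/common.sh@25–@39) just accumulate flags into the `NVMF_APP` array. A minimal sketch of that accumulation, with the binary path and surrounding control flow assumed rather than taken from the SPDK source:

```shell
# Hypothetical sketch of the flag assembly traced above; the nvmf_tgt path
# and the interrupt_mode guard are assumptions, the appended flags match
# the log (common.sh@29 and @34).
NVMF_APP=(./build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
interrupt_mode=1

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + trace-flag mask, as at common.sh@29
if [ "$interrupt_mode" -eq 1 ]; then
  NVMF_APP+=(--interrupt-mode)                # as at common.sh@34 in this run
fi
echo "${NVMF_APP[@]}"
```

Building the command as a bash array (rather than a string) is what lets the harness append flags conditionally and later expand them without re-quoting.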
00:29:07.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lcov --version 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:07.537 09:05:20 
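The trace above steps through scripts/common.sh's component-wise version comparison (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): split both versions on dots, then compare each index until one side wins. A minimal, self-contained re-sketch of that logic — function and variable names follow the trace, but this is an assumption-based reconstruction, not the exact SPDK implementation:

```shell
# Hypothetical sketch of the cmp_versions loop seen in the trace:
# returns 0 (true) if $1 < $2, comparing dot-separated numeric components.
lt() {
  local IFS=.
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # missing components compare as 0, so "2" behaves like "2.0"
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1  # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2" || echo "1.15 >= 2"
```

This is why the trace shows `ver1[v]=1` against `ver2[v]=2` and then `return 0`: the comparison is decided at the first differing component, so lcov 1.15 is correctly treated as older than 2.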
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:07.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.537 --rc genhtml_branch_coverage=1 00:29:07.537 --rc genhtml_function_coverage=1 00:29:07.537 --rc genhtml_legend=1 00:29:07.537 --rc geninfo_all_blocks=1 00:29:07.537 --rc geninfo_unexecuted_blocks=1 00:29:07.537 00:29:07.537 ' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:07.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.537 --rc genhtml_branch_coverage=1 00:29:07.537 --rc genhtml_function_coverage=1 00:29:07.537 --rc genhtml_legend=1 00:29:07.537 --rc geninfo_all_blocks=1 00:29:07.537 --rc geninfo_unexecuted_blocks=1 00:29:07.537 00:29:07.537 ' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:07.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.537 --rc genhtml_branch_coverage=1 00:29:07.537 --rc genhtml_function_coverage=1 00:29:07.537 --rc genhtml_legend=1 00:29:07.537 --rc geninfo_all_blocks=1 00:29:07.537 --rc geninfo_unexecuted_blocks=1 00:29:07.537 00:29:07.537 ' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:07.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.537 --rc genhtml_branch_coverage=1 00:29:07.537 --rc genhtml_function_coverage=1 00:29:07.537 --rc genhtml_legend=1 00:29:07.537 --rc geninfo_all_blocks=1 00:29:07.537 --rc geninfo_unexecuted_blocks=1 00:29:07.537 00:29:07.537 ' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.537 09:05:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.537 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.538 09:05:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.538 09:05:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:10.067 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:10.068 09:05:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:10.068 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:10.068 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:10.068 
09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:10.068 Found net devices under 0000:09:00.0: cvl_0_0 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:10.068 Found net devices under 0000:09:00.1: cvl_0_1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:10.068 09:05:22 
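The "Found net devices under 0000:09:00.x" lines come from a sysfs glob (nvmf/common.sh@409 and @425): each PCI address is mapped to its kernel interface by listing `/sys/bus/pci/devices/<pci>/net/` and stripping the path. A self-contained sketch of that mapping, using a fake sysfs tree in a temp directory so it runs anywhere — the directory layout is a stand-in for the real `/sys` hierarchy:

```shell
# Hypothetical, filesystem-independent sketch of the sysfs walk in the trace;
# the temp tree mimics /sys/bus/pci/devices/$pci/net/<iface>.
sysfs=$(mktemp -d)
pci=0000:09:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)        # glob the device's net/ directory (common.sh@409)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names (common.sh@425)
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

The `${var##*/}` expansion is the same trick the harness uses: it strips everything up to the last `/`, turning a sysfs path into a bare interface name like `cvl_0_0`.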
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:10.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:29:10.068 00:29:10.068 --- 10.0.0.2 ping statistics --- 00:29:10.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.068 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:10.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:29:10.068 00:29:10.068 --- 10.0.0.1 ping statistics --- 00:29:10.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.068 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:29:10.068 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:10.069 09:05:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=942444 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 942444 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 942444 ']' 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.069 [2024-11-06 09:05:23.078269] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:10.069 [2024-11-06 09:05:23.079410] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:29:10.069 [2024-11-06 09:05:23.079469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.069 [2024-11-06 09:05:23.150774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:10.069 [2024-11-06 09:05:23.208321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.069 [2024-11-06 09:05:23.208371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.069 [2024-11-06 09:05:23.208392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.069 [2024-11-06 09:05:23.208410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.069 [2024-11-06 09:05:23.208424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:10.069 [2024-11-06 09:05:23.209983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:10.069 [2024-11-06 09:05:23.210037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:10.069 [2024-11-06 09:05:23.210040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.069 [2024-11-06 09:05:23.297052] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:10.069 [2024-11-06 09:05:23.297232] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:10.069 [2024-11-06 09:05:23.297254] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:10.069 [2024-11-06 09:05:23.297535] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.069 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.069 [2024-11-06 09:05:23.350765] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:10.327 Malloc0 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 Delay0 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 [2024-11-06 09:05:23.427003] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.327 09:05:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:10.328 [2024-11-06 09:05:23.495557] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:12.856 Initializing NVMe Controllers 00:29:12.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:12.856 controller IO queue size 128 less than required 00:29:12.856 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:12.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:12.856 Initialization complete. Launching workers. 
00:29:12.856 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25866 00:29:12.856 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25923, failed to submit 66 00:29:12.856 success 25866, unsuccessful 57, failed 0 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.856 rmmod nvme_tcp 00:29:12.856 rmmod nvme_fabrics 00:29:12.856 rmmod nvme_keyring 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.856 09:05:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 942444 ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 942444 ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 942444' 00:29:12.856 killing process with pid 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 942444 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.856 09:05:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.759 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.759 00:29:14.759 real 0m7.327s 00:29:14.759 user 0m8.945s 00:29:14.759 sys 0m2.958s 00:29:14.759 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:14.759 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:14.759 ************************************ 00:29:14.760 END TEST nvmf_abort 00:29:14.760 ************************************ 00:29:14.760 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:14.760 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:14.760 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:14.760 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:14.760 ************************************ 00:29:14.760 START TEST nvmf_ns_hotplug_stress 00:29:14.760 ************************************ 00:29:14.760 09:05:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:14.760 * Looking for test storage... 00:29:14.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.760 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:29:14.760 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lcov --version 00:29:14.760 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:29:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.019 --rc genhtml_branch_coverage=1 00:29:15.019 --rc genhtml_function_coverage=1 00:29:15.019 --rc genhtml_legend=1 00:29:15.019 --rc geninfo_all_blocks=1 00:29:15.019 --rc geninfo_unexecuted_blocks=1 00:29:15.019 00:29:15.019 ' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:29:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.019 --rc genhtml_branch_coverage=1 00:29:15.019 --rc genhtml_function_coverage=1 00:29:15.019 --rc genhtml_legend=1 00:29:15.019 --rc geninfo_all_blocks=1 00:29:15.019 --rc geninfo_unexecuted_blocks=1 00:29:15.019 00:29:15.019 ' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:29:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.019 --rc genhtml_branch_coverage=1 00:29:15.019 --rc genhtml_function_coverage=1 00:29:15.019 --rc genhtml_legend=1 00:29:15.019 --rc geninfo_all_blocks=1 00:29:15.019 --rc geninfo_unexecuted_blocks=1 00:29:15.019 00:29:15.019 ' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:29:15.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.019 --rc genhtml_branch_coverage=1 00:29:15.019 --rc genhtml_function_coverage=1 00:29:15.019 --rc genhtml_legend=1 00:29:15.019 --rc geninfo_all_blocks=1 00:29:15.019 --rc geninfo_unexecuted_blocks=1 00:29:15.019 00:29:15.019 ' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.019 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.020 09:05:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:15.020 09:05:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.020 09:05:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:16.922 09:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.922 
09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:16.922 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.922 09:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:16.922 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:16.922 09:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:16.922 Found net devices under 0000:09:00.0: cvl_0_0 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:16.922 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:16.923 Found net devices under 0000:09:00.1: cvl_0_1 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.923 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:29:17.181 00:29:17.181 --- 10.0.0.2 ping statistics --- 00:29:17.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.181 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:17.181 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:29:17.181 00:29:17.181 --- 10.0.0.1 ping statistics --- 00:29:17.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.182 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.182 09:05:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=944666 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 944666 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 944666 ']' 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:17.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.182 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:17.182 [2024-11-06 09:05:30.403959] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.182 [2024-11-06 09:05:30.405055] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:29:17.182 [2024-11-06 09:05:30.405122] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.440 [2024-11-06 09:05:30.476238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:17.440 [2024-11-06 09:05:30.532559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.440 [2024-11-06 09:05:30.532610] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.440 [2024-11-06 09:05:30.532632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.440 [2024-11-06 09:05:30.532649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.440 [2024-11-06 09:05:30.532664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.440 [2024-11-06 09:05:30.534239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.440 [2024-11-06 09:05:30.534283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.440 [2024-11-06 09:05:30.534287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.440 [2024-11-06 09:05:30.617738] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:17.440 [2024-11-06 09:05:30.617965] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:17.440 [2024-11-06 09:05:30.617991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:17.440 [2024-11-06 09:05:30.618292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:17.440 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:17.698 [2024-11-06 09:05:30.919044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.698 09:05:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:17.956 09:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.213 [2024-11-06 09:05:31.471346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.214 09:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:18.471 09:05:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:19.037 Malloc0 00:29:19.037 09:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:19.037 Delay0 00:29:19.037 09:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.295 09:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:19.860 NULL1 00:29:19.860 09:05:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:20.117 09:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=945053 00:29:20.117 09:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:20.117 09:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:20.117 09:05:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.048 Read completed with error (sct=0, sc=11) 00:29:21.305 09:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:21.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.305 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:21.563 09:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:21.563 09:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:21.820 true 00:29:21.820 09:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:21.820 09:05:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.383 09:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:22.947 09:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:22.947 09:05:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:22.947 true 00:29:22.947 09:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:22.947 09:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.203 09:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.461 09:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:23.461 09:05:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:23.718 true 00:29:23.976 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:23.976 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.233 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.491 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:24.491 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:24.748 true 00:29:24.748 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:24.748 09:05:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:25.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.681 09:05:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:25.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:25.938 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:25.938 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:26.195 true 00:29:26.195 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:26.195 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.452 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.709 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:26.709 09:05:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:26.966 true 00:29:26.966 09:05:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:26.967 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.224 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.481 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:27.481 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:27.738 true 00:29:27.738 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:27.738 09:05:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.671 09:05:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:28.928 09:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1008 00:29:28.928 09:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:29.493 true 00:29:29.493 09:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:29.493 09:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.493 09:05:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.752 09:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:29.752 09:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:30.010 true 00:29:30.267 09:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:30.267 09:05:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.199 09:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:31.199 09:05:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:31.199 09:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:31.456 true 00:29:31.456 09:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:31.456 09:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.713 09:05:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:31.970 09:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:31.970 09:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:32.228 true 00:29:32.484 09:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:32.484 09:05:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.048 09:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.306 09:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:33.306 09:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:33.563 true 00:29:33.563 09:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:33.563 09:05:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.128 09:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.128 09:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:34.128 09:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:34.386 true 00:29:34.386 09:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:34.386 09:05:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:35.318 09:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:35.576 09:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:35.576 09:05:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:35.833 true 00:29:35.833 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:35.833 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.090 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.347 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:36.347 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:36.604 true 00:29:36.604 09:05:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:36.604 09:05:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.861 09:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.119 09:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:37.119 09:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:37.376 true 00:29:37.376 09:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:37.376 09:05:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.748 09:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:38.748 09:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:38.748 09:05:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:39.005 true 00:29:39.005 09:05:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:39.005 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.262 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.520 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:39.520 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:39.777 true 00:29:39.777 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:39.777 09:05:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.041 09:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.367 09:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:40.367 09:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:40.686 true 
00:29:40.686 09:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:40.686 09:05:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.619 09:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:41.877 09:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:41.878 09:05:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:42.135 true 00:29:42.135 09:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:42.136 09:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.394 09:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.652 09:05:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:42.652 09:05:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:42.909 true 00:29:42.909 09:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:42.910 09:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:43.843 09:05:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.843 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:43.843 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:44.101 true 00:29:44.101 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:44.101 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.359 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:44.617 09:05:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:44.617 09:05:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:44.875 true 00:29:44.875 09:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:44.875 09:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:45.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.808 09:05:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:45.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:46.066 09:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:46.066 09:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:46.324 true 00:29:46.324 09:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:46.324 09:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:29:46.581 09:05:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.839 09:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:46.840 09:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:47.097 true 00:29:47.097 09:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:47.097 09:06:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.031 09:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:48.289 09:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:48.289 09:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:48.547 true 00:29:48.547 09:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 945053 00:29:48.547 09:06:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.806 09:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.064 09:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:49.064 09:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:49.323 true 00:29:49.323 09:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:49.323 09:06:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:50.259 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.259 Initializing NVMe Controllers 00:29:50.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.259 Controller IO queue size 128, less than required. 00:29:50.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.259 Controller IO queue size 128, less than required. 
00:29:50.259 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:50.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:50.259 Initialization complete. Launching workers. 00:29:50.259 ======================================================== 00:29:50.259 Latency(us) 00:29:50.259 Device Information : IOPS MiB/s Average min max 00:29:50.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 970.04 0.47 65371.84 2722.81 1022937.68 00:29:50.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10312.90 5.04 12411.88 1398.60 537634.96 00:29:50.259 ======================================================== 00:29:50.259 Total : 11282.95 5.51 16965.07 1398.60 1022937.68 00:29:50.259 00:29:50.517 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:50.517 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:50.775 true 00:29:50.775 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 945053 00:29:50.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (945053) - No such process 00:29:50.775 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 945053 00:29:50.775 09:06:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.032 09:06:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:51.290 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:51.290 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:51.290 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:51.290 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:51.290 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:51.549 null0 00:29:51.549 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:51.549 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:51.549 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:51.808 null1 00:29:51.808 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:51.808 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:51.808 09:06:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:52.066 null2 00:29:52.066 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:52.066 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:52.066 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:52.324 null3 00:29:52.324 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:52.324 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:52.324 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:52.582 null4 00:29:52.582 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:52.582 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:52.582 09:06:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:52.840 null5 00:29:52.840 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:52.840 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:52.840 09:06:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:53.097 null6 00:29:53.097 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:53.097 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:53.097 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:53.356 null7 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.356 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 949085 949086 949088 949090 949092 949094 949096 949098 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:53.357 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:53.923 09:06:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.182 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:54.440 09:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:54.440 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:54.699 09:06:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:54.699 09:06:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:54.958 09:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:54.958 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.217 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:55.475 09:06:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:55.475 09:06:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:55.733 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.733 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.733 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:55.991 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:55.991 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:56.249 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:56.249 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:56.507 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.507 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.507 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:56.508 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:56.508 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:56.766 09:06:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:56.766 09:06:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.024 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.025 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:57.283 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:57.850 09:06:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:57.850 09:06:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:57.850 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:58.108 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.366 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.366 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.366 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 
null3 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.367 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:58.625 09:06:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:58.884 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:59.142 09:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:59.142 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:59.400 09:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:59.400 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:59.400 rmmod nvme_tcp 00:29:59.658 rmmod nvme_fabrics 00:29:59.658 rmmod nvme_keyring 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 944666 ']' 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 944666 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 944666 ']' 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 944666 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 944666 00:29:59.658 09:06:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 944666' 00:29:59.658 killing process with pid 944666 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 944666 00:29:59.658 09:06:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 944666 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.918 09:06:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.918 09:06:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.822 00:30:01.822 real 0m47.107s 00:30:01.822 user 3m16.629s 00:30:01.822 sys 0m22.488s 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.822 ************************************ 00:30:01.822 END TEST nvmf_ns_hotplug_stress 00:30:01.822 ************************************ 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.822 ************************************ 00:30:01.822 START TEST nvmf_delete_subsystem 00:30:01.822 ************************************ 00:30:01.822 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:02.081 * Looking for test storage... 00:30:02.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lcov --version 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:02.081 
09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:02.081 09:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:30:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.081 --rc genhtml_branch_coverage=1 00:30:02.081 --rc genhtml_function_coverage=1 00:30:02.081 --rc genhtml_legend=1 00:30:02.081 --rc geninfo_all_blocks=1 00:30:02.081 --rc geninfo_unexecuted_blocks=1 00:30:02.081 00:30:02.081 ' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:30:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.081 --rc genhtml_branch_coverage=1 00:30:02.081 --rc genhtml_function_coverage=1 00:30:02.081 --rc genhtml_legend=1 00:30:02.081 --rc geninfo_all_blocks=1 00:30:02.081 --rc geninfo_unexecuted_blocks=1 00:30:02.081 00:30:02.081 ' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:30:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.081 --rc genhtml_branch_coverage=1 00:30:02.081 --rc genhtml_function_coverage=1 00:30:02.081 --rc genhtml_legend=1 00:30:02.081 --rc geninfo_all_blocks=1 00:30:02.081 --rc 
geninfo_unexecuted_blocks=1 00:30:02.081 00:30:02.081 ' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:30:02.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:02.081 --rc genhtml_branch_coverage=1 00:30:02.081 --rc genhtml_function_coverage=1 00:30:02.081 --rc genhtml_legend=1 00:30:02.081 --rc geninfo_all_blocks=1 00:30:02.081 --rc geninfo_unexecuted_blocks=1 00:30:02.081 00:30:02.081 ' 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:02.081 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.082 
09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:02.082 09:06:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:02.082 09:06:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:04.613 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.613 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:30:04.614 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:04.614 09:06:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:04.614 Found net devices under 0000:09:00.0: cvl_0_0 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:04.614 Found net devices under 0000:09:00.1: cvl_0_1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:04.614 09:06:17 
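The trace above (common.sh@409-427) resolves each PCI address to its kernel net interface by globbing the device's sysfs `net/` directory and stripping the path prefix. A minimal sketch of that mapping, using a mock sysfs tree so it runs without the actual E810 hardware (the mock directory and interface name are stand-ins for illustration):

```shell
# Mock the sysfs layout that common.sh globs: /sys/bus/pci/devices/$pci/net/<ifname>
mock=$(mktemp -d)
pci="0000:09:00.0"
mkdir -p "$mock/devices/$pci/net/cvl_0_0"

# Same two steps as the trace: glob the net/ dir, then keep only the basename
pci_net_devs=("$mock/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$mock"
```

The `##*/` parameter expansion is what turns the full sysfs path into the bare interface name (`cvl_0_0`) that later feeds `net_devs`.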
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:04.614 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.614 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:30:04.614 00:30:04.614 --- 10.0.0.2 ping statistics --- 00:30:04.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.614 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.614 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:30:04.614 00:30:04.614 --- 10.0.0.1 ping statistics --- 00:30:04.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.614 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
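The nvmf_tcp_init sequence above (common.sh@267-291) moves one interface into a fresh network namespace so target and initiator get isolated IP stacks on the same host, then verifies reachability in both directions. A dry-run sketch of that plumbing, condensed from the trace (the `run` wrapper only echoes; replace it with direct execution under root to actually apply the commands):

```shell
# Dry-run wrapper: print each command instead of executing it (these ops need root)
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                  # namespace for the target side
run ip link set cvl_0_0 netns "$NS"                     # target NIC moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, host namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside netns
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                  # host -> netns sanity check
run ip netns exec "$NS" ping -c 1 10.0.0.1              # netns -> host sanity check
```

With both pings succeeding, `NVMF_TARGET_IP=10.0.0.2` is reachable from the initiator interface and the harness returns 0 from `nvmf_tcp_init`.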
nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=952475 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 952475 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 952475 ']' 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
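nvmfappstart above launches the target application inside the namespace with an explicit core mask and interrupt mode, then waits for its RPC socket. A dry-run sketch of that launch line (the relative `build/bin/nvmf_tgt` path is an assumption standing in for the absolute workspace path in the log):

```shell
# Dry-run wrapper: print the launch command rather than starting the target
run() { echo "+ $*"; }

NS_CMD="ip netns exec cvl_0_0_ns_spdk"   # NVMF_TARGET_NS_CMD from the trace
APP="build/bin/nvmf_tgt"                 # assumed relative path to the built binary

# -i 0: shm id, -e 0xFFFF: tracepoint mask, -m 0x3: run on cores 0 and 1,
# --interrupt-mode: reactors sleep on events instead of busy-polling
run $NS_CMD $APP -i 0 -e 0xFFFF --interrupt-mode -m 0x3
```

The subsequent `waitforlisten 952475` simply polls until that PID is listening on `/var/tmp/spdk.sock`, which is why the log shows the "Waiting for process..." message before any RPCs are issued.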
00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:04.614 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 [2024-11-06 09:06:17.516998] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:04.614 [2024-11-06 09:06:17.518086] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:30:04.614 [2024-11-06 09:06:17.518139] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.614 [2024-11-06 09:06:17.587230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.614 [2024-11-06 09:06:17.642955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.614 [2024-11-06 09:06:17.643009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.615 [2024-11-06 09:06:17.643023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.615 [2024-11-06 09:06:17.643041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.615 [2024-11-06 09:06:17.643052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.615 [2024-11-06 09:06:17.644335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.615 [2024-11-06 09:06:17.644342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.615 [2024-11-06 09:06:17.729779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:04.615 [2024-11-06 09:06:17.729804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:04.615 [2024-11-06 09:06:17.730048] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 [2024-11-06 09:06:17.784969] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 [2024-11-06 09:06:17.801220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 NULL1 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 Delay0 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=952496 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:04.615 09:06:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:04.615 [2024-11-06 09:06:17.882824] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
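The delete_subsystem.sh steps above build the test fixture over RPC: a TCP transport, a subsystem with a listener, and a null bdev wrapped in a delay bdev (so in-flight I/O is still queued when the subsystem is deleted mid-run). A dry-run sketch of that RPC sequence, collected from the trace (`scripts/rpc.py` and the socket path are the usual SPDK defaults, assumed here; the wrapper only echoes):

```shell
# Dry-run wrapper around SPDK's JSON-RPC client (prints instead of invoking)
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
rpc() { echo "+ $RPC $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                  # 1000 MiB, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

With Delay0 imposing ~1 s latencies, the `spdk_nvme_perf -t 5` run that follows guarantees outstanding commands exist when `nvmf_delete_subsystem` fires, which is exactly what produces the expected burst of "completed with error (sct=0, sc=8)" aborts below.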
00:30:07.142 09:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:07.142 09:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.142 09:06:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 
00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O 
failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 starting I/O failed: -6 
00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 
starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.142 Write completed with error (sct=0, sc=8) 00:30:07.142 Read completed with error (sct=0, sc=8) 00:30:07.142 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 starting I/O failed: -6 00:30:07.143 starting I/O failed: -6 00:30:07.143 starting I/O failed: -6 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 
00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 
Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 
Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Write completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 starting I/O failed: -6 00:30:07.143 Read completed with error (sct=0, sc=8) 00:30:07.143 [2024-11-06 09:06:20.099925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d8c000c00 is same with the state(6) to be set 00:30:08.077 [2024-11-06 09:06:21.060443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23229a0 is same with the state(6) to be set 00:30:08.077 Write completed with error (sct=0, sc=8) 00:30:08.077 Read completed with error (sct=0, sc=8) 00:30:08.077 Read completed with error (sct=0, sc=8) 00:30:08.077 Read completed with error (sct=0, sc=8) 00:30:08.077 Write completed with error (sct=0, sc=8) 00:30:08.077 Read completed with error (sct=0, sc=8) 00:30:08.077 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 
00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 [2024-11-06 09:06:21.097914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23212c0 is same with the state(6) to be set 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read 
completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 [2024-11-06 09:06:21.098147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23214a0 is same with the state(6) to be set 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with 
error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 [2024-11-06 09:06:21.098371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2321860 is same with the state(6) to be set 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 
00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 Read completed with error (sct=0, sc=8) 00:30:08.078 Write completed with error (sct=0, sc=8) 00:30:08.078 [2024-11-06 09:06:21.100790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d8c00d310 is same with the state(6) to be set 00:30:08.078 Initializing NVMe Controllers 00:30:08.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:08.078 Controller IO queue size 128, less than required. 00:30:08.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:08.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:08.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:08.078 Initialization complete. Launching workers. 00:30:08.078 ======================================================== 00:30:08.078 Latency(us) 00:30:08.078 Device Information : IOPS MiB/s Average min max 00:30:08.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.54 0.09 959233.49 922.18 1011961.43 00:30:08.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 174.63 0.09 860987.71 657.42 1013901.99 00:30:08.078 ======================================================== 00:30:08.078 Total : 360.16 0.18 911599.17 657.42 1013901.99 00:30:08.078 00:30:08.078 [2024-11-06 09:06:21.101383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23229a0 (9): Bad file descriptor 00:30:08.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:08.078 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.078 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:08.078 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 952496 00:30:08.078 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:08.336 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 952496 00:30:08.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 
35: kill: (952496) - No such process 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 952496 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 952496 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 952496 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:08.337 09:06:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:08.337 [2024-11-06 09:06:21.621175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:08.337 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=953017 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:08.594 09:06:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:08.594 [2024-11-06 09:06:21.681563] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:08.878 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:08.878 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:08.878 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:09.502 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:09.502 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:09.502 09:06:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:10.118 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:10.118 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 
00:30:10.118 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:10.374 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:10.374 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:10.374 09:06:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:10.938 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:10.938 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:10.938 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:11.502 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:11.502 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:11.502 09:06:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:11.760 Initializing NVMe Controllers 00:30:11.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:11.760 Controller IO queue size 128, less than required. 00:30:11.760 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:11.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:11.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:11.760 Initialization complete. Launching workers. 
00:30:11.760 ======================================================== 00:30:11.760 Latency(us) 00:30:11.760 Device Information : IOPS MiB/s Average min max 00:30:11.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005592.16 1000345.04 1043615.57 00:30:11.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004250.10 1000252.20 1011360.56 00:30:11.760 ======================================================== 00:30:11.760 Total : 256.00 0.12 1004921.13 1000252.20 1043615.57 00:30:11.760 00:30:12.017 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:12.017 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 953017 00:30:12.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (953017) - No such process 00:30:12.017 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 953017 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:12.018 rmmod nvme_tcp 00:30:12.018 rmmod nvme_fabrics 00:30:12.018 rmmod nvme_keyring 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 952475 ']' 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 952475 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 952475 ']' 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 952475 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 952475 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 952475' 00:30:12.018 killing process with pid 952475 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 952475 00:30:12.018 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 952475 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.276 09:06:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.804 00:30:14.804 real 0m12.362s 00:30:14.804 user 0m24.715s 00:30:14.804 sys 0m3.787s 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:14.804 ************************************ 00:30:14.804 END TEST nvmf_delete_subsystem 00:30:14.804 ************************************ 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:14.804 ************************************ 00:30:14.804 START TEST nvmf_host_management 00:30:14.804 ************************************ 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:14.804 * Looking for test storage... 
00:30:14.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lcov --version 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.804 09:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:14.804 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:30:14.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.805 --rc genhtml_branch_coverage=1 00:30:14.805 --rc genhtml_function_coverage=1 00:30:14.805 --rc genhtml_legend=1 00:30:14.805 --rc geninfo_all_blocks=1 00:30:14.805 --rc geninfo_unexecuted_blocks=1 00:30:14.805 00:30:14.805 ' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:30:14.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.805 --rc genhtml_branch_coverage=1 00:30:14.805 --rc genhtml_function_coverage=1 00:30:14.805 --rc genhtml_legend=1 00:30:14.805 --rc geninfo_all_blocks=1 00:30:14.805 --rc geninfo_unexecuted_blocks=1 00:30:14.805 00:30:14.805 ' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:30:14.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.805 --rc genhtml_branch_coverage=1 00:30:14.805 --rc genhtml_function_coverage=1 00:30:14.805 --rc genhtml_legend=1 00:30:14.805 --rc geninfo_all_blocks=1 00:30:14.805 --rc geninfo_unexecuted_blocks=1 00:30:14.805 00:30:14.805 ' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:30:14.805 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.805 --rc genhtml_branch_coverage=1 00:30:14.805 --rc genhtml_function_coverage=1 00:30:14.805 --rc genhtml_legend=1 00:30:14.805 --rc geninfo_all_blocks=1 00:30:14.805 --rc geninfo_unexecuted_blocks=1 00:30:14.805 00:30:14.805 ' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.805 09:06:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.805 
09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:14.805 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.806 09:06:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.707 
09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.707 09:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:16.707 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.707 09:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:16.707 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.707 09:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:16.707 Found net devices under 0000:09:00.0: cvl_0_0 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.707 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:16.707 Found net devices under 0000:09:00.1: cvl_0_1 00:30:16.707 09:06:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:30:16.708 00:30:16.708 --- 10.0.0.2 ping statistics --- 00:30:16.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.708 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:30:16.708 00:30:16.708 --- 10.0.0.1 ping statistics --- 00:30:16.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.708 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:16.708 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=955360 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 955360 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 955360 ']' 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.966 09:06:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:16.966 [2024-11-06 09:06:30.048386] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:16.966 [2024-11-06 09:06:30.049492] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:30:16.966 [2024-11-06 09:06:30.049557] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.966 [2024-11-06 09:06:30.123268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.966 [2024-11-06 09:06:30.183146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.966 [2024-11-06 09:06:30.183219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.966 [2024-11-06 09:06:30.183240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.966 [2024-11-06 09:06:30.183257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.966 [2024-11-06 09:06:30.183270] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:16.966 [2024-11-06 09:06:30.184919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.966 [2024-11-06 09:06:30.184971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.966 [2024-11-06 09:06:30.184993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:16.966 [2024-11-06 09:06:30.184996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.224 [2024-11-06 09:06:30.274643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:17.224 [2024-11-06 09:06:30.274905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:17.224 [2024-11-06 09:06:30.275256] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:17.225 [2024-11-06 09:06:30.275897] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:17.225 [2024-11-06 09:06:30.276200] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 [2024-11-06 09:06:30.333761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 09:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 Malloc0 00:30:17.225 [2024-11-06 09:06:30.413978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=955406 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 955406 /var/tmp/bdevperf.sock 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 955406 ']' 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:17.225 { 00:30:17.225 "params": { 00:30:17.225 "name": "Nvme$subsystem", 00:30:17.225 "trtype": "$TEST_TRANSPORT", 00:30:17.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.225 "adrfam": "ipv4", 00:30:17.225 "trsvcid": "$NVMF_PORT", 00:30:17.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.225 "hdgst": ${hdgst:-false}, 00:30:17.225 "ddgst": ${ddgst:-false} 00:30:17.225 }, 00:30:17.225 "method": "bdev_nvme_attach_controller" 00:30:17.225 } 00:30:17.225 EOF 00:30:17.225 )") 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:17.225 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:17.225 "params": { 00:30:17.225 "name": "Nvme0", 00:30:17.225 "trtype": "tcp", 00:30:17.225 "traddr": "10.0.0.2", 00:30:17.225 "adrfam": "ipv4", 00:30:17.225 "trsvcid": "4420", 00:30:17.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.225 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.225 "hdgst": false, 00:30:17.225 "ddgst": false 00:30:17.225 }, 00:30:17.225 "method": "bdev_nvme_attach_controller" 00:30:17.225 }' 00:30:17.225 [2024-11-06 09:06:30.488479] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:30:17.225 [2024-11-06 09:06:30.488559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955406 ] 00:30:17.482 [2024-11-06 09:06:30.559858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.482 [2024-11-06 09:06:30.617855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.741 Running I/O for 10 seconds... 
00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:17.741 09:06:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:17.741 09:06:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=537 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 537 -ge 100 ']' 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.000 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:18.000 [2024-11-06 09:06:31.225770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.225990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is 
same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be 
set 00:30:18.000 [2024-11-06 09:06:31.226199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 09:06:31.226335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.000 [2024-11-06 
09:06:31.226348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.226489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c83a0 is same with the state(6) to be set 00:30:18.001 09:06:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.001 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:18.001 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:18.001 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:18.001 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:18.001 09:06:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:18.001 [2024-11-06 09:06:31.241518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.001 [2024-11-06 09:06:31.241564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.001 [2024-11-06 09:06:31.241598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.001 [2024-11-06 09:06:31.241626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:18.001 [2024-11-06 09:06:31.241653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1844a40 is same with the state(6) to be set 00:30:18.001 [2024-11-06 09:06:31.241762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:18.001 [2024-11-06 09:06:31.241926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.241971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.241985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 [2024-11-06 09:06:31.242592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.001 [2024-11-06 09:06:31.242605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.001 
[2024-11-06 09:06:31.242620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.242977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.242992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 
09:06:31.243333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.243732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:18.002 [2024-11-06 09:06:31.243745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:18.002 [2024-11-06 09:06:31.244945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:18.002 task offset: 81664 on job bdev=Nvme0n1 fails 00:30:18.002 00:30:18.002 Latency(us) 00:30:18.002 [2024-11-06T08:06:31.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.002 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:18.002 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:18.002 Verification LBA range: start 0x0 length 0x400 00:30:18.002 Nvme0n1 : 0.40 1583.24 98.95 158.82 0.00 35686.54 2657.85 34175.81 00:30:18.002 [2024-11-06T08:06:31.291Z] =================================================================================================================== 00:30:18.002 [2024-11-06T08:06:31.292Z] Total : 1583.24 98.95 
158.82 0.00 35686.54 2657.85 34175.81 00:30:18.003 [2024-11-06 09:06:31.246805] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:18.003 [2024-11-06 09:06:31.246854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1844a40 (9): Bad file descriptor 00:30:18.261 [2024-11-06 09:06:31.298212] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 955406 00:30:19.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (955406) - No such process 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 
-- # config+=("$(cat <<-EOF 00:30:19.192 { 00:30:19.192 "params": { 00:30:19.192 "name": "Nvme$subsystem", 00:30:19.192 "trtype": "$TEST_TRANSPORT", 00:30:19.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:19.192 "adrfam": "ipv4", 00:30:19.192 "trsvcid": "$NVMF_PORT", 00:30:19.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:19.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:19.192 "hdgst": ${hdgst:-false}, 00:30:19.192 "ddgst": ${ddgst:-false} 00:30:19.192 }, 00:30:19.192 "method": "bdev_nvme_attach_controller" 00:30:19.192 } 00:30:19.192 EOF 00:30:19.192 )") 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:30:19.192 09:06:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:19.192 "params": { 00:30:19.192 "name": "Nvme0", 00:30:19.192 "trtype": "tcp", 00:30:19.192 "traddr": "10.0.0.2", 00:30:19.192 "adrfam": "ipv4", 00:30:19.192 "trsvcid": "4420", 00:30:19.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.192 "hdgst": false, 00:30:19.192 "ddgst": false 00:30:19.192 }, 00:30:19.192 "method": "bdev_nvme_attach_controller" 00:30:19.192 }' 00:30:19.192 [2024-11-06 09:06:32.289753] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:30:19.192 [2024-11-06 09:06:32.289862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid955679 ] 00:30:19.192 [2024-11-06 09:06:32.359324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.192 [2024-11-06 09:06:32.418079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.757 Running I/O for 1 seconds... 00:30:20.688 1664.00 IOPS, 104.00 MiB/s 00:30:20.688 Latency(us) 00:30:20.688 [2024-11-06T08:06:33.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.688 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.688 Verification LBA range: start 0x0 length 0x400 00:30:20.688 Nvme0n1 : 1.03 1684.84 105.30 0.00 0.00 37369.18 5364.24 33593.27 00:30:20.688 [2024-11-06T08:06:33.977Z] =================================================================================================================== 00:30:20.688 [2024-11-06T08:06:33.977Z] Total : 1684.84 105.30 0.00 0.00 37369.18 5364.24 33593.27 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:20.946 rmmod nvme_tcp 00:30:20.946 rmmod nvme_fabrics 00:30:20.946 rmmod nvme_keyring 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 955360 ']' 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 955360 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 955360 ']' 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 955360 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:30:20.946 09:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 955360 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 955360' 00:30:20.946 killing process with pid 955360 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 955360 00:30:20.946 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 955360 00:30:21.205 [2024-11-06 09:06:34.394021] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:21.205 09:06:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:21.205 09:06:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:23.741 00:30:23.741 real 0m8.948s 00:30:23.741 user 0m18.088s 00:30:23.741 sys 0m3.762s 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:23.741 ************************************ 00:30:23.741 END TEST nvmf_host_management 00:30:23.741 ************************************ 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:23.741 
09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:23.741 ************************************ 00:30:23.741 START TEST nvmf_lvol 00:30:23.741 ************************************ 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:23.741 * Looking for test storage... 00:30:23.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lcov --version 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:23.741 09:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:23.741 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:30:23.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.741 --rc genhtml_branch_coverage=1 00:30:23.742 --rc 
genhtml_function_coverage=1 00:30:23.742 --rc genhtml_legend=1 00:30:23.742 --rc geninfo_all_blocks=1 00:30:23.742 --rc geninfo_unexecuted_blocks=1 00:30:23.742 00:30:23.742 ' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:30:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.742 --rc genhtml_branch_coverage=1 00:30:23.742 --rc genhtml_function_coverage=1 00:30:23.742 --rc genhtml_legend=1 00:30:23.742 --rc geninfo_all_blocks=1 00:30:23.742 --rc geninfo_unexecuted_blocks=1 00:30:23.742 00:30:23.742 ' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:30:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.742 --rc genhtml_branch_coverage=1 00:30:23.742 --rc genhtml_function_coverage=1 00:30:23.742 --rc genhtml_legend=1 00:30:23.742 --rc geninfo_all_blocks=1 00:30:23.742 --rc geninfo_unexecuted_blocks=1 00:30:23.742 00:30:23.742 ' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:30:23.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:23.742 --rc genhtml_branch_coverage=1 00:30:23.742 --rc genhtml_function_coverage=1 00:30:23.742 --rc genhtml_legend=1 00:30:23.742 --rc geninfo_all_blocks=1 00:30:23.742 --rc geninfo_unexecuted_blocks=1 00:30:23.742 00:30:23.742 ' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.742 09:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.742 09:06:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # 
prepare_net_devs 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:23.742 09:06:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
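The gather_supported_nvmf_pci_devs step that follows builds e810/x722/mlx ID tables and classifies each PCI NIC it finds. A minimal sketch of that vendor:device matching, using the IDs visible in this trace (the `classify_nic` helper is hypothetical, not part of nvmf/common.sh):

```shell
# Sketch of the NIC-family classification done while scanning pci_devs.
# The vendor/device IDs below are the ones the trace registers.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2) echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *) echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the devices at 0000:09:00.0/.1; prints e810
```

This matches the trace: both found devices report `0x8086 - 0x159b`, so the `[[ e810 == e810 ]]` branch is taken.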
00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:25.645 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:25.646 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:25.646 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.646 09:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:25.646 Found net devices under 0000:09:00.0: cvl_0_0 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:25.646 Found net devices under 0000:09:00.1: cvl_0_1 00:30:25.646 09:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:25.646 09:06:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:25.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
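The namespace plumbing just executed (one port of the NIC pair moved into a private namespace as the target side, the other left in the root namespace as the initiator) reduces to the sequence below. `run` is a hypothetical dry-run wrapper that only echoes, so this sketch is safe without root or the cvl_0_* hardware; drop the wrapper to apply the commands for real.

```shell
# Dry-run sketch of the target/initiator split performed by nvmf_tcp_init.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }   # echo-only; replace with direct execution to apply

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                       # target port into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP, root netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # connectivity check, as in the log
```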
00:30:25.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:25.646 00:30:25.646 --- 10.0.0.2 ping statistics --- 00:30:25.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.646 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:25.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:25.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:30:25.646 00:30:25.646 --- 10.0.0.1 ping statistics --- 00:30:25.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:25.646 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:25.646 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:25.647 
09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=957888 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 957888 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 957888 ']' 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.647 09:06:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:25.906 [2024-11-06 09:06:38.952433] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
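The waitforlisten call above blocks until the freshly started nvmf_tgt accepts RPCs on /var/tmp/spdk.sock, retrying up to max_retries=100. A simplified sketch of that loop, assuming a plain check that the UNIX socket exists (the real helper probes the RPC socket itself, so this is an approximation):

```shell
# Sketch of the waitforlisten retry loop: poll for the app's UNIX domain
# socket, giving up after max_retries attempts.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$sock" ]; then
            return 0    # socket present; the target is up
        fi
        sleep 0.1
    done
    return 1            # timed out waiting for the target
}
```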
00:30:25.906 [2024-11-06 09:06:38.953449] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:30:25.906 [2024-11-06 09:06:38.953502] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.906 [2024-11-06 09:06:39.025534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:25.906 [2024-11-06 09:06:39.082167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.906 [2024-11-06 09:06:39.082222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:25.906 [2024-11-06 09:06:39.082236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.906 [2024-11-06 09:06:39.082247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.906 [2024-11-06 09:06:39.082257] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.906 [2024-11-06 09:06:39.083675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.906 [2024-11-06 09:06:39.083735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.906 [2024-11-06 09:06:39.083739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.906 [2024-11-06 09:06:39.169124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:25.906 [2024-11-06 09:06:39.169356] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:25.906 [2024-11-06 09:06:39.169379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
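The `-m 0x7` mask the target was started with is why exactly three reactors come up, on cores 0, 1, and 2; the perf run later in this test uses `-c 0x18`, which lands on lcores 3 and 4. Mask expansion can be sketched as:

```shell
# Expand a hex core mask into the list of core indices it selects.
mask_to_cores() {
    local mask=$(( $1 )) bit cores=""
    for ((bit = 0; bit < 64; bit++)); do
        if (( mask >> bit & 1 )); then
            cores+="${cores:+ }$bit"
        fi
    done
    echo "$cores"
}

mask_to_cores 0x7    # prints: 0 1 2  (the three reactors in this log)
mask_to_cores 0x18   # prints: 3 4   (the perf cores)
```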
00:30:25.906 [2024-11-06 09:06:39.169614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:25.906 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.906 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:30:25.906 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:25.906 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.906 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:26.164 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.164 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:26.422 [2024-11-06 09:06:39.464433] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.422 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.679 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:26.679 09:06:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.936 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:26.936 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:27.193 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:27.450 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ec572139-565d-424c-8758-de8e491b0574 00:30:27.450 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ec572139-565d-424c-8758-de8e491b0574 lvol 20 00:30:27.708 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2fefc87a-6db1-4e31-99b4-7c01049844bb 00:30:27.708 09:06:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:27.966 09:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2fefc87a-6db1-4e31-99b4-7c01049844bb 00:30:28.222 09:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:28.481 [2024-11-06 09:06:41.736538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.481 09:06:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:28.739 
09:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=958194 00:30:28.739 09:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:28.739 09:06:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:30.111 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2fefc87a-6db1-4e31-99b4-7c01049844bb MY_SNAPSHOT 00:30:30.111 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=790754af-5f15-4bc6-a14e-fd6918445e41 00:30:30.111 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2fefc87a-6db1-4e31-99b4-7c01049844bb 30 00:30:30.369 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 790754af-5f15-4bc6-a14e-fd6918445e41 MY_CLONE 00:30:30.934 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=91e574f2-1fd4-430d-a9f3-5c0e6cdbb12e 00:30:30.934 09:06:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 91e574f2-1fd4-430d-a9f3-5c0e6cdbb12e 00:30:31.498 09:06:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 958194 00:30:39.601 Initializing NVMe Controllers 00:30:39.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:39.601 
Controller IO queue size 128, less than required. 00:30:39.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:39.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:39.601 Initialization complete. Launching workers. 00:30:39.601 ======================================================== 00:30:39.601 Latency(us) 00:30:39.601 Device Information : IOPS MiB/s Average min max 00:30:39.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10607.50 41.44 12069.75 1799.83 91257.99 00:30:39.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10466.10 40.88 12233.54 4263.19 75594.18 00:30:39.601 ======================================================== 00:30:39.601 Total : 21073.60 82.32 12151.10 1799.83 91257.99 00:30:39.601 00:30:39.601 09:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.601 09:06:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2fefc87a-6db1-4e31-99b4-7c01049844bb 00:30:39.858 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ec572139-565d-424c-8758-de8e491b0574 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.116 rmmod nvme_tcp 00:30:40.116 rmmod nvme_fabrics 00:30:40.116 rmmod nvme_keyring 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 957888 ']' 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 957888 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 957888 ']' 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 957888 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:40.116 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 957888 00:30:40.373 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:40.373 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:40.373 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 957888' 00:30:40.373 killing process with pid 957888 00:30:40.373 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 957888 00:30:40.374 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 957888 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.631 09:06:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.631 09:06:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.533 00:30:42.533 real 0m19.217s 00:30:42.533 user 0m56.745s 00:30:42.533 sys 0m7.641s 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:42.533 ************************************ 00:30:42.533 END TEST nvmf_lvol 00:30:42.533 ************************************ 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:42.533 ************************************ 00:30:42.533 START TEST nvmf_lvs_grow 00:30:42.533 ************************************ 00:30:42.533 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:42.792 * Looking for test storage... 
00:30:42.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lcov --version 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.792 09:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.792 09:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:30:42.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.792 --rc genhtml_branch_coverage=1 00:30:42.792 --rc genhtml_function_coverage=1 00:30:42.792 --rc genhtml_legend=1 00:30:42.792 --rc geninfo_all_blocks=1 00:30:42.792 --rc geninfo_unexecuted_blocks=1 00:30:42.792 00:30:42.792 ' 00:30:42.792 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:30:42.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.792 --rc genhtml_branch_coverage=1 00:30:42.792 --rc genhtml_function_coverage=1 00:30:42.792 --rc genhtml_legend=1 00:30:42.793 --rc geninfo_all_blocks=1 00:30:42.793 --rc geninfo_unexecuted_blocks=1 00:30:42.793 00:30:42.793 ' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:30:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.793 --rc genhtml_branch_coverage=1 00:30:42.793 --rc genhtml_function_coverage=1 00:30:42.793 --rc genhtml_legend=1 00:30:42.793 --rc geninfo_all_blocks=1 00:30:42.793 --rc geninfo_unexecuted_blocks=1 00:30:42.793 00:30:42.793 ' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:30:42.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.793 --rc genhtml_branch_coverage=1 00:30:42.793 --rc genhtml_function_coverage=1 00:30:42.793 --rc genhtml_legend=1 00:30:42.793 --rc geninfo_all_blocks=1 00:30:42.793 --rc 
geninfo_unexecuted_blocks=1 00:30:42.793 00:30:42.793 ' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:42.793 09:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.793 09:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.793 09:06:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.793 09:06:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.321 
09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.321 09:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.321 09:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.321 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:45.321 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:45.322 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:45.322 Found net devices under 0000:09:00.0: cvl_0_0 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.322 09:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:45.322 Found net devices under 0000:09:00.1: cvl_0_1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.322 
09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:45.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:30:45.322 00:30:45.322 --- 10.0.0.2 ping statistics --- 00:30:45.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.322 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:45.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:30:45.322 00:30:45.322 --- 10.0.0.1 ping statistics --- 00:30:45.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.322 09:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=961565 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 961565 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 961565 ']' 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.322 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.322 [2024-11-06 09:06:58.293111] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:45.322 [2024-11-06 09:06:58.294247] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:30:45.322 [2024-11-06 09:06:58.294317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.322 [2024-11-06 09:06:58.368498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.322 [2024-11-06 09:06:58.427313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.322 [2024-11-06 09:06:58.427365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.323 [2024-11-06 09:06:58.427394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.323 [2024-11-06 09:06:58.427405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.323 [2024-11-06 09:06:58.427415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:45.323 [2024-11-06 09:06:58.428026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.323 [2024-11-06 09:06:58.525060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:45.323 [2024-11-06 09:06:58.525369] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.323 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:45.581 [2024-11-06 09:06:58.828606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.581 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:45.581 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:45.581 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.581 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.839 ************************************ 00:30:45.839 START TEST lvs_grow_clean 00:30:45.839 ************************************ 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:30:45.839 09:06:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:45.839 09:06:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:46.097 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:46.097 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:46.355 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:30:46.355 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:30:46.355 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:46.612 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:46.612 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:46.612 09:06:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f lvol 150 00:30:46.870 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=2762122e-6b7a-4d30-82c0-0dbae251f78b 00:30:46.870 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:46.870 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:47.127 [2024-11-06 09:07:00.272536] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:47.127 [2024-11-06 09:07:00.272632] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:47.127 true 00:30:47.127 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:30:47.127 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:47.389 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:47.389 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:47.690 09:07:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2762122e-6b7a-4d30-82c0-0dbae251f78b 00:30:47.972 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:48.229 [2024-11-06 09:07:01.384805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.229 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=962005 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 962005 /var/tmp/bdevperf.sock 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 962005 ']' 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:48.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:48.487 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:48.487 [2024-11-06 09:07:01.713050] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:30:48.487 [2024-11-06 09:07:01.713150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962005 ] 00:30:48.745 [2024-11-06 09:07:01.779298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.745 [2024-11-06 09:07:01.837126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.745 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:48.745 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:30:48.745 09:07:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:49.310 Nvme0n1 00:30:49.310 09:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:49.568 [ 00:30:49.568 { 00:30:49.568 "name": "Nvme0n1", 00:30:49.568 "aliases": [ 00:30:49.568 "2762122e-6b7a-4d30-82c0-0dbae251f78b" 00:30:49.568 ], 00:30:49.568 "product_name": "NVMe disk", 00:30:49.568 
"block_size": 4096, 00:30:49.568 "num_blocks": 38912, 00:30:49.568 "uuid": "2762122e-6b7a-4d30-82c0-0dbae251f78b", 00:30:49.568 "numa_id": 0, 00:30:49.568 "assigned_rate_limits": { 00:30:49.568 "rw_ios_per_sec": 0, 00:30:49.568 "rw_mbytes_per_sec": 0, 00:30:49.568 "r_mbytes_per_sec": 0, 00:30:49.568 "w_mbytes_per_sec": 0 00:30:49.568 }, 00:30:49.568 "claimed": false, 00:30:49.568 "zoned": false, 00:30:49.568 "supported_io_types": { 00:30:49.568 "read": true, 00:30:49.568 "write": true, 00:30:49.568 "unmap": true, 00:30:49.568 "flush": true, 00:30:49.568 "reset": true, 00:30:49.568 "nvme_admin": true, 00:30:49.568 "nvme_io": true, 00:30:49.568 "nvme_io_md": false, 00:30:49.568 "write_zeroes": true, 00:30:49.568 "zcopy": false, 00:30:49.568 "get_zone_info": false, 00:30:49.568 "zone_management": false, 00:30:49.568 "zone_append": false, 00:30:49.569 "compare": true, 00:30:49.569 "compare_and_write": true, 00:30:49.569 "abort": true, 00:30:49.569 "seek_hole": false, 00:30:49.569 "seek_data": false, 00:30:49.569 "copy": true, 00:30:49.569 "nvme_iov_md": false 00:30:49.569 }, 00:30:49.569 "memory_domains": [ 00:30:49.569 { 00:30:49.569 "dma_device_id": "system", 00:30:49.569 "dma_device_type": 1 00:30:49.569 } 00:30:49.569 ], 00:30:49.569 "driver_specific": { 00:30:49.569 "nvme": [ 00:30:49.569 { 00:30:49.569 "trid": { 00:30:49.569 "trtype": "TCP", 00:30:49.569 "adrfam": "IPv4", 00:30:49.569 "traddr": "10.0.0.2", 00:30:49.569 "trsvcid": "4420", 00:30:49.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:49.569 }, 00:30:49.569 "ctrlr_data": { 00:30:49.569 "cntlid": 1, 00:30:49.569 "vendor_id": "0x8086", 00:30:49.569 "model_number": "SPDK bdev Controller", 00:30:49.569 "serial_number": "SPDK0", 00:30:49.569 "firmware_revision": "25.01", 00:30:49.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.569 "oacs": { 00:30:49.569 "security": 0, 00:30:49.569 "format": 0, 00:30:49.569 "firmware": 0, 00:30:49.569 "ns_manage": 0 00:30:49.569 }, 00:30:49.569 "multi_ctrlr": true, 
00:30:49.569 "ana_reporting": false 00:30:49.569 }, 00:30:49.569 "vs": { 00:30:49.569 "nvme_version": "1.3" 00:30:49.569 }, 00:30:49.569 "ns_data": { 00:30:49.569 "id": 1, 00:30:49.569 "can_share": true 00:30:49.569 } 00:30:49.569 } 00:30:49.569 ], 00:30:49.569 "mp_policy": "active_passive" 00:30:49.569 } 00:30:49.569 } 00:30:49.569 ] 00:30:49.569 09:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=962135 00:30:49.569 09:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.569 09:07:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:49.569 Running I/O for 10 seconds... 00:30:50.941 Latency(us) 00:30:50.941 [2024-11-06T08:07:04.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.941 Nvme0n1 : 1.00 14696.00 57.41 0.00 0.00 0.00 0.00 0.00 00:30:50.941 [2024-11-06T08:07:04.230Z] =================================================================================================================== 00:30:50.941 [2024-11-06T08:07:04.230Z] Total : 14696.00 57.41 0.00 0.00 0.00 0.00 0.00 00:30:50.941 00:30:51.506 09:07:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:30:51.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.506 Nvme0n1 : 2.00 14870.50 58.09 0.00 0.00 0.00 0.00 0.00 00:30:51.506 [2024-11-06T08:07:04.795Z] 
=================================================================================================================== 00:30:51.506 [2024-11-06T08:07:04.795Z] Total : 14870.50 58.09 0.00 0.00 0.00 0.00 0.00 00:30:51.506 00:30:51.763 true 00:30:51.763 09:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:30:51.763 09:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:52.021 09:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:52.021 09:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:52.021 09:07:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 962135 00:30:52.586 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:52.586 Nvme0n1 : 3.00 15002.67 58.60 0.00 0.00 0.00 0.00 0.00 00:30:52.586 [2024-11-06T08:07:05.875Z] =================================================================================================================== 00:30:52.586 [2024-11-06T08:07:05.875Z] Total : 15002.67 58.60 0.00 0.00 0.00 0.00 0.00 00:30:52.586 00:30:53.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:53.522 Nvme0n1 : 4.00 15099.25 58.98 0.00 0.00 0.00 0.00 0.00 00:30:53.522 [2024-11-06T08:07:06.811Z] =================================================================================================================== 00:30:53.522 [2024-11-06T08:07:06.811Z] Total : 15099.25 58.98 0.00 0.00 0.00 0.00 0.00 00:30:53.522 00:30:54.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:30:54.895 Nvme0n1 : 5.00 15196.60 59.36 0.00 0.00 0.00 0.00 0.00 00:30:54.895 [2024-11-06T08:07:08.184Z] =================================================================================================================== 00:30:54.895 [2024-11-06T08:07:08.184Z] Total : 15196.60 59.36 0.00 0.00 0.00 0.00 0.00 00:30:54.895 00:30:55.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:55.828 Nvme0n1 : 6.00 15260.50 59.61 0.00 0.00 0.00 0.00 0.00 00:30:55.828 [2024-11-06T08:07:09.117Z] =================================================================================================================== 00:30:55.828 [2024-11-06T08:07:09.117Z] Total : 15260.50 59.61 0.00 0.00 0.00 0.00 0.00 00:30:55.828 00:30:56.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:56.761 Nvme0n1 : 7.00 15299.00 59.76 0.00 0.00 0.00 0.00 0.00 00:30:56.761 [2024-11-06T08:07:10.050Z] =================================================================================================================== 00:30:56.761 [2024-11-06T08:07:10.050Z] Total : 15299.00 59.76 0.00 0.00 0.00 0.00 0.00 00:30:56.761 00:30:57.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.694 Nvme0n1 : 8.00 15310.50 59.81 0.00 0.00 0.00 0.00 0.00 00:30:57.694 [2024-11-06T08:07:10.983Z] =================================================================================================================== 00:30:57.694 [2024-11-06T08:07:10.983Z] Total : 15310.50 59.81 0.00 0.00 0.00 0.00 0.00 00:30:57.694 00:30:58.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.628 Nvme0n1 : 9.00 15354.22 59.98 0.00 0.00 0.00 0.00 0.00 00:30:58.628 [2024-11-06T08:07:11.917Z] =================================================================================================================== 00:30:58.628 [2024-11-06T08:07:11.917Z] Total : 15354.22 59.98 0.00 0.00 0.00 0.00 0.00 00:30:58.628 
00:30:59.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.559 Nvme0n1 : 10.00 15401.60 60.16 0.00 0.00 0.00 0.00 0.00 00:30:59.559 [2024-11-06T08:07:12.849Z] =================================================================================================================== 00:30:59.560 [2024-11-06T08:07:12.849Z] Total : 15401.60 60.16 0.00 0.00 0.00 0.00 0.00 00:30:59.560 00:30:59.560 00:30:59.560 Latency(us) 00:30:59.560 [2024-11-06T08:07:12.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.560 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.560 Nvme0n1 : 10.01 15404.82 60.18 0.00 0.00 8304.22 4393.34 18544.26 00:30:59.560 [2024-11-06T08:07:12.849Z] =================================================================================================================== 00:30:59.560 [2024-11-06T08:07:12.849Z] Total : 15404.82 60.18 0.00 0.00 8304.22 4393.34 18544.26 00:30:59.560 { 00:30:59.560 "results": [ 00:30:59.560 { 00:30:59.560 "job": "Nvme0n1", 00:30:59.560 "core_mask": "0x2", 00:30:59.560 "workload": "randwrite", 00:30:59.560 "status": "finished", 00:30:59.560 "queue_depth": 128, 00:30:59.560 "io_size": 4096, 00:30:59.560 "runtime": 10.006219, 00:30:59.560 "iops": 15404.819742602076, 00:30:59.560 "mibps": 60.17507711953936, 00:30:59.560 "io_failed": 0, 00:30:59.560 "io_timeout": 0, 00:30:59.560 "avg_latency_us": 8304.216728715428, 00:30:59.560 "min_latency_us": 4393.339259259259, 00:30:59.560 "max_latency_us": 18544.26074074074 00:30:59.560 } 00:30:59.560 ], 00:30:59.560 "core_count": 1 00:30:59.560 } 00:30:59.560 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 962005 00:30:59.560 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 962005 ']' 00:30:59.560 09:07:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 962005 00:30:59.560 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:30:59.560 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.560 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 962005 00:30:59.818 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:59.818 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:59.818 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 962005' 00:30:59.818 killing process with pid 962005 00:30:59.818 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 962005 00:30:59.818 Received shutdown signal, test time was about 10.000000 seconds 00:30:59.818 00:30:59.818 Latency(us) 00:30:59.818 [2024-11-06T08:07:13.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:59.818 [2024-11-06T08:07:13.107Z] =================================================================================================================== 00:30:59.818 [2024-11-06T08:07:13.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:59.818 09:07:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 962005 00:30:59.818 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:00.383 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.383 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:00.383 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:00.641 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:00.641 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:00.641 09:07:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:00.899 [2024-11-06 09:07:14.188606] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:01.157 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:01.416 request: 00:31:01.416 { 00:31:01.416 "uuid": "fcf0e25c-1ddf-487c-b3db-e8f83e1a224f", 00:31:01.416 "method": 
"bdev_lvol_get_lvstores", 00:31:01.416 "req_id": 1 00:31:01.416 } 00:31:01.416 Got JSON-RPC error response 00:31:01.416 response: 00:31:01.416 { 00:31:01.416 "code": -19, 00:31:01.416 "message": "No such device" 00:31:01.416 } 00:31:01.416 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:01.416 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:01.416 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:01.416 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:01.416 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:01.673 aio_bdev 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2762122e-6b7a-4d30-82c0-0dbae251f78b 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=2762122e-6b7a-4d30-82c0-0dbae251f78b 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:01.673 09:07:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:01.930 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2762122e-6b7a-4d30-82c0-0dbae251f78b -t 2000 00:31:02.188 [ 00:31:02.188 { 00:31:02.188 "name": "2762122e-6b7a-4d30-82c0-0dbae251f78b", 00:31:02.188 "aliases": [ 00:31:02.188 "lvs/lvol" 00:31:02.188 ], 00:31:02.188 "product_name": "Logical Volume", 00:31:02.188 "block_size": 4096, 00:31:02.188 "num_blocks": 38912, 00:31:02.188 "uuid": "2762122e-6b7a-4d30-82c0-0dbae251f78b", 00:31:02.188 "assigned_rate_limits": { 00:31:02.188 "rw_ios_per_sec": 0, 00:31:02.188 "rw_mbytes_per_sec": 0, 00:31:02.188 "r_mbytes_per_sec": 0, 00:31:02.188 "w_mbytes_per_sec": 0 00:31:02.188 }, 00:31:02.188 "claimed": false, 00:31:02.188 "zoned": false, 00:31:02.188 "supported_io_types": { 00:31:02.188 "read": true, 00:31:02.188 "write": true, 00:31:02.188 "unmap": true, 00:31:02.188 "flush": false, 00:31:02.188 "reset": true, 00:31:02.188 "nvme_admin": false, 00:31:02.188 "nvme_io": false, 00:31:02.188 "nvme_io_md": false, 00:31:02.188 "write_zeroes": true, 00:31:02.188 "zcopy": false, 00:31:02.188 "get_zone_info": false, 00:31:02.188 "zone_management": false, 00:31:02.188 "zone_append": false, 00:31:02.188 "compare": false, 00:31:02.188 "compare_and_write": false, 00:31:02.188 "abort": false, 00:31:02.188 "seek_hole": true, 00:31:02.188 "seek_data": true, 00:31:02.188 "copy": false, 00:31:02.188 "nvme_iov_md": false 00:31:02.188 }, 00:31:02.188 "driver_specific": { 00:31:02.188 "lvol": { 00:31:02.188 "lvol_store_uuid": "fcf0e25c-1ddf-487c-b3db-e8f83e1a224f", 00:31:02.188 "base_bdev": "aio_bdev", 00:31:02.188 
"thin_provision": false, 00:31:02.188 "num_allocated_clusters": 38, 00:31:02.188 "snapshot": false, 00:31:02.188 "clone": false, 00:31:02.188 "esnap_clone": false 00:31:02.188 } 00:31:02.188 } 00:31:02.188 } 00:31:02.188 ] 00:31:02.188 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:31:02.188 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:02.188 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:02.446 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:02.446 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 00:31:02.446 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:02.705 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:02.705 09:07:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2762122e-6b7a-4d30-82c0-0dbae251f78b 00:31:02.962 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fcf0e25c-1ddf-487c-b3db-e8f83e1a224f 
00:31:03.220 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:03.479 00:31:03.479 real 0m17.865s 00:31:03.479 user 0m17.530s 00:31:03.479 sys 0m1.809s 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:03.479 ************************************ 00:31:03.479 END TEST lvs_grow_clean 00:31:03.479 ************************************ 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.479 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:03.737 ************************************ 00:31:03.737 START TEST lvs_grow_dirty 00:31:03.737 ************************************ 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:03.737 09:07:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:03.737 09:07:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:03.996 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:03.996 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:04.255 09:07:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:04.255 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:04.255 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:04.514 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:04.514 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:04.514 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 lvol 150 00:31:04.772 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:04.772 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:04.772 09:07:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:05.030 [2024-11-06 09:07:18.184498] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:05.030 [2024-11-06 
09:07:18.184584] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:05.030 true 00:31:05.030 09:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:05.030 09:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:05.288 09:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:05.288 09:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:05.546 09:07:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:05.804 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.062 [2024-11-06 09:07:19.284787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.062 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=964060 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 964060 /var/tmp/bdevperf.sock 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 964060 ']' 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:06.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:06.320 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:06.580 [2024-11-06 09:07:19.626982] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:31:06.580 [2024-11-06 09:07:19.627059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid964060 ] 00:31:06.580 [2024-11-06 09:07:19.693639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.580 [2024-11-06 09:07:19.754346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.839 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:06.839 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:06.839 09:07:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:07.096 Nvme0n1 00:31:07.096 09:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:07.353 [ 00:31:07.353 { 00:31:07.353 "name": "Nvme0n1", 00:31:07.353 "aliases": [ 00:31:07.353 "460ff833-d60b-4071-b74b-d2c0e43142bc" 00:31:07.353 ], 00:31:07.353 "product_name": "NVMe disk", 00:31:07.353 "block_size": 4096, 00:31:07.353 "num_blocks": 38912, 00:31:07.353 "uuid": "460ff833-d60b-4071-b74b-d2c0e43142bc", 00:31:07.353 "numa_id": 0, 00:31:07.353 "assigned_rate_limits": { 00:31:07.353 "rw_ios_per_sec": 0, 00:31:07.353 "rw_mbytes_per_sec": 0, 00:31:07.353 "r_mbytes_per_sec": 0, 00:31:07.353 "w_mbytes_per_sec": 0 00:31:07.353 }, 00:31:07.353 "claimed": false, 00:31:07.353 "zoned": false, 
00:31:07.353 "supported_io_types": { 00:31:07.353 "read": true, 00:31:07.353 "write": true, 00:31:07.353 "unmap": true, 00:31:07.353 "flush": true, 00:31:07.353 "reset": true, 00:31:07.353 "nvme_admin": true, 00:31:07.353 "nvme_io": true, 00:31:07.353 "nvme_io_md": false, 00:31:07.353 "write_zeroes": true, 00:31:07.353 "zcopy": false, 00:31:07.353 "get_zone_info": false, 00:31:07.353 "zone_management": false, 00:31:07.353 "zone_append": false, 00:31:07.353 "compare": true, 00:31:07.353 "compare_and_write": true, 00:31:07.353 "abort": true, 00:31:07.353 "seek_hole": false, 00:31:07.353 "seek_data": false, 00:31:07.353 "copy": true, 00:31:07.353 "nvme_iov_md": false 00:31:07.353 }, 00:31:07.353 "memory_domains": [ 00:31:07.353 { 00:31:07.353 "dma_device_id": "system", 00:31:07.353 "dma_device_type": 1 00:31:07.353 } 00:31:07.353 ], 00:31:07.353 "driver_specific": { 00:31:07.353 "nvme": [ 00:31:07.353 { 00:31:07.353 "trid": { 00:31:07.353 "trtype": "TCP", 00:31:07.353 "adrfam": "IPv4", 00:31:07.353 "traddr": "10.0.0.2", 00:31:07.353 "trsvcid": "4420", 00:31:07.353 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:07.353 }, 00:31:07.353 "ctrlr_data": { 00:31:07.353 "cntlid": 1, 00:31:07.353 "vendor_id": "0x8086", 00:31:07.353 "model_number": "SPDK bdev Controller", 00:31:07.353 "serial_number": "SPDK0", 00:31:07.353 "firmware_revision": "25.01", 00:31:07.354 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:07.354 "oacs": { 00:31:07.354 "security": 0, 00:31:07.354 "format": 0, 00:31:07.354 "firmware": 0, 00:31:07.354 "ns_manage": 0 00:31:07.354 }, 00:31:07.354 "multi_ctrlr": true, 00:31:07.354 "ana_reporting": false 00:31:07.354 }, 00:31:07.354 "vs": { 00:31:07.354 "nvme_version": "1.3" 00:31:07.354 }, 00:31:07.354 "ns_data": { 00:31:07.354 "id": 1, 00:31:07.354 "can_share": true 00:31:07.354 } 00:31:07.354 } 00:31:07.354 ], 00:31:07.354 "mp_policy": "active_passive" 00:31:07.354 } 00:31:07.354 } 00:31:07.354 ] 00:31:07.611 09:07:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=964184 00:31:07.611 09:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:07.611 09:07:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:07.611 Running I/O for 10 seconds... 00:31:08.541 Latency(us) 00:31:08.541 [2024-11-06T08:07:21.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:08.541 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.541 Nvme0n1 : 1.00 12413.00 48.49 0.00 0.00 0.00 0.00 0.00 00:31:08.541 [2024-11-06T08:07:21.830Z] =================================================================================================================== 00:31:08.541 [2024-11-06T08:07:21.830Z] Total : 12413.00 48.49 0.00 0.00 0.00 0.00 0.00 00:31:08.541 00:31:09.475 09:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:09.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.475 Nvme0n1 : 2.00 12494.50 48.81 0.00 0.00 0.00 0.00 0.00 00:31:09.475 [2024-11-06T08:07:22.764Z] =================================================================================================================== 00:31:09.475 [2024-11-06T08:07:22.764Z] Total : 12494.50 48.81 0.00 0.00 0.00 0.00 0.00 00:31:09.475 00:31:09.732 true 00:31:09.732 09:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:09.732 09:07:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:09.990 09:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:09.990 09:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:09.990 09:07:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 964184 00:31:10.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.556 Nvme0n1 : 3.00 12551.00 49.03 0.00 0.00 0.00 0.00 0.00 00:31:10.556 [2024-11-06T08:07:23.845Z] =================================================================================================================== 00:31:10.556 [2024-11-06T08:07:23.845Z] Total : 12551.00 49.03 0.00 0.00 0.00 0.00 0.00 00:31:10.556 00:31:11.490 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.490 Nvme0n1 : 4.00 12607.25 49.25 0.00 0.00 0.00 0.00 0.00 00:31:11.490 [2024-11-06T08:07:24.779Z] =================================================================================================================== 00:31:11.490 [2024-11-06T08:07:24.779Z] Total : 12607.25 49.25 0.00 0.00 0.00 0.00 0.00 00:31:11.490 00:31:12.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:12.864 Nvme0n1 : 5.00 12655.40 49.44 0.00 0.00 0.00 0.00 0.00 00:31:12.864 [2024-11-06T08:07:26.153Z] =================================================================================================================== 00:31:12.864 [2024-11-06T08:07:26.153Z] Total : 12655.40 49.44 0.00 0.00 0.00 0.00 0.00 00:31:12.864 00:31:13.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:13.799 Nvme0n1 : 6.00 12684.83 49.55 0.00 0.00 0.00 0.00 0.00 00:31:13.799 [2024-11-06T08:07:27.088Z] =================================================================================================================== 00:31:13.799 [2024-11-06T08:07:27.088Z] Total : 12684.83 49.55 0.00 0.00 0.00 0.00 0.00 00:31:13.799 00:31:14.733 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:14.733 Nvme0n1 : 7.00 12716.14 49.67 0.00 0.00 0.00 0.00 0.00 00:31:14.733 [2024-11-06T08:07:28.022Z] =================================================================================================================== 00:31:14.733 [2024-11-06T08:07:28.022Z] Total : 12716.14 49.67 0.00 0.00 0.00 0.00 0.00 00:31:14.733 00:31:15.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:15.667 Nvme0n1 : 8.00 12741.62 49.77 0.00 0.00 0.00 0.00 0.00 00:31:15.667 [2024-11-06T08:07:28.956Z] =================================================================================================================== 00:31:15.667 [2024-11-06T08:07:28.956Z] Total : 12741.62 49.77 0.00 0.00 0.00 0.00 0.00 00:31:15.667 00:31:16.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:16.599 Nvme0n1 : 9.00 12777.44 49.91 0.00 0.00 0.00 0.00 0.00 00:31:16.599 [2024-11-06T08:07:29.888Z] =================================================================================================================== 00:31:16.599 [2024-11-06T08:07:29.888Z] Total : 12777.44 49.91 0.00 0.00 0.00 0.00 0.00 00:31:16.599 00:31:17.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:17.532 Nvme0n1 : 10.00 12804.50 50.02 0.00 0.00 0.00 0.00 0.00 00:31:17.532 [2024-11-06T08:07:30.821Z] =================================================================================================================== 00:31:17.532 [2024-11-06T08:07:30.821Z] Total : 12804.50 50.02 0.00 0.00 0.00 0.00 0.00 00:31:17.532 00:31:17.532 
00:31:17.532 Latency(us) 00:31:17.532 [2024-11-06T08:07:30.821Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:17.532 Nvme0n1 : 10.01 12805.54 50.02 0.00 0.00 9987.01 2730.67 12815.93 00:31:17.532 [2024-11-06T08:07:30.821Z] =================================================================================================================== 00:31:17.532 [2024-11-06T08:07:30.821Z] Total : 12805.54 50.02 0.00 0.00 9987.01 2730.67 12815.93 00:31:17.532 { 00:31:17.532 "results": [ 00:31:17.532 { 00:31:17.532 "job": "Nvme0n1", 00:31:17.532 "core_mask": "0x2", 00:31:17.532 "workload": "randwrite", 00:31:17.532 "status": "finished", 00:31:17.532 "queue_depth": 128, 00:31:17.532 "io_size": 4096, 00:31:17.532 "runtime": 10.009186, 00:31:17.532 "iops": 12805.536833864413, 00:31:17.532 "mibps": 50.02162825728286, 00:31:17.532 "io_failed": 0, 00:31:17.532 "io_timeout": 0, 00:31:17.532 "avg_latency_us": 9987.0084819967, 00:31:17.532 "min_latency_us": 2730.6666666666665, 00:31:17.532 "max_latency_us": 12815.92888888889 00:31:17.532 } 00:31:17.532 ], 00:31:17.532 "core_count": 1 00:31:17.532 } 00:31:17.532 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 964060 00:31:17.532 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 964060 ']' 00:31:17.532 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 964060 00:31:17.532 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:31:17.532 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:17.532 09:07:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 964060 00:31:17.793 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:17.793 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:17.793 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 964060' 00:31:17.793 killing process with pid 964060 00:31:17.793 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 964060 00:31:17.793 Received shutdown signal, test time was about 10.000000 seconds 00:31:17.793 00:31:17.793 Latency(us) 00:31:17.793 [2024-11-06T08:07:31.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:17.793 [2024-11-06T08:07:31.082Z] =================================================================================================================== 00:31:17.793 [2024-11-06T08:07:31.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:17.793 09:07:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 964060 00:31:17.793 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:18.084 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.368 09:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:18.368 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 961565 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 961565 00:31:18.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 961565 Killed "${NVMF_APP[@]}" "$@" 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=965508 00:31:18.933 09:07:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 965508 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 965508 ']' 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:18.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:18.933 09:07:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:18.933 [2024-11-06 09:07:32.009705] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:18.933 [2024-11-06 09:07:32.010799] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:31:18.933 [2024-11-06 09:07:32.010903] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:18.933 [2024-11-06 09:07:32.082568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.933 [2024-11-06 09:07:32.139594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:18.933 [2024-11-06 09:07:32.139648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:18.933 [2024-11-06 09:07:32.139677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:18.933 [2024-11-06 09:07:32.139688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:18.933 [2024-11-06 09:07:32.139697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:18.934 [2024-11-06 09:07:32.140257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.191 [2024-11-06 09:07:32.227705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:19.192 [2024-11-06 09:07:32.228031] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:19.192 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:19.450 [2024-11-06 09:07:32.575197] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:19.450 [2024-11-06 09:07:32.575347] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:19.450 [2024-11-06 09:07:32.575400] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local 
bdev_name=460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:19.450 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:19.708 09:07:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 460ff833-d60b-4071-b74b-d2c0e43142bc -t 2000 00:31:19.966 [ 00:31:19.966 { 00:31:19.966 "name": "460ff833-d60b-4071-b74b-d2c0e43142bc", 00:31:19.966 "aliases": [ 00:31:19.966 "lvs/lvol" 00:31:19.966 ], 00:31:19.966 "product_name": "Logical Volume", 00:31:19.966 "block_size": 4096, 00:31:19.967 "num_blocks": 38912, 00:31:19.967 "uuid": "460ff833-d60b-4071-b74b-d2c0e43142bc", 00:31:19.967 "assigned_rate_limits": { 00:31:19.967 "rw_ios_per_sec": 0, 00:31:19.967 "rw_mbytes_per_sec": 0, 00:31:19.967 "r_mbytes_per_sec": 0, 00:31:19.967 "w_mbytes_per_sec": 0 00:31:19.967 }, 00:31:19.967 "claimed": false, 00:31:19.967 "zoned": false, 00:31:19.967 "supported_io_types": { 00:31:19.967 "read": true, 00:31:19.967 "write": true, 00:31:19.967 "unmap": true, 00:31:19.967 "flush": false, 00:31:19.967 "reset": true, 00:31:19.967 "nvme_admin": false, 00:31:19.967 "nvme_io": false, 00:31:19.967 "nvme_io_md": false, 00:31:19.967 "write_zeroes": true, 
00:31:19.967 "zcopy": false, 00:31:19.967 "get_zone_info": false, 00:31:19.967 "zone_management": false, 00:31:19.967 "zone_append": false, 00:31:19.967 "compare": false, 00:31:19.967 "compare_and_write": false, 00:31:19.967 "abort": false, 00:31:19.967 "seek_hole": true, 00:31:19.967 "seek_data": true, 00:31:19.967 "copy": false, 00:31:19.967 "nvme_iov_md": false 00:31:19.967 }, 00:31:19.967 "driver_specific": { 00:31:19.967 "lvol": { 00:31:19.967 "lvol_store_uuid": "235c6294-0c57-467a-bfc8-1fe8f2fb7919", 00:31:19.967 "base_bdev": "aio_bdev", 00:31:19.967 "thin_provision": false, 00:31:19.967 "num_allocated_clusters": 38, 00:31:19.967 "snapshot": false, 00:31:19.967 "clone": false, 00:31:19.967 "esnap_clone": false 00:31:19.967 } 00:31:19.967 } 00:31:19.967 } 00:31:19.967 ] 00:31:19.967 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:19.967 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:19.967 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:20.228 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:20.228 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:20.228 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:20.486 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:20.486 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:20.745 [2024-11-06 09:07:33.948771] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:20.745 09:07:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:21.003 request: 00:31:21.003 { 00:31:21.003 "uuid": "235c6294-0c57-467a-bfc8-1fe8f2fb7919", 00:31:21.003 "method": "bdev_lvol_get_lvstores", 00:31:21.003 "req_id": 1 00:31:21.003 } 00:31:21.003 Got JSON-RPC error response 00:31:21.003 response: 00:31:21.003 { 00:31:21.003 "code": -19, 00:31:21.003 "message": "No such device" 00:31:21.003 } 00:31:21.003 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:31:21.003 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:21.003 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:21.003 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:21.003 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:21.261 aio_bdev 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:21.261 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:21.828 09:07:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 460ff833-d60b-4071-b74b-d2c0e43142bc -t 2000 00:31:21.828 [ 00:31:21.828 { 00:31:21.828 "name": "460ff833-d60b-4071-b74b-d2c0e43142bc", 00:31:21.828 "aliases": [ 00:31:21.828 "lvs/lvol" 00:31:21.828 ], 00:31:21.828 "product_name": "Logical Volume", 00:31:21.828 "block_size": 4096, 00:31:21.828 "num_blocks": 38912, 00:31:21.828 "uuid": "460ff833-d60b-4071-b74b-d2c0e43142bc", 00:31:21.828 "assigned_rate_limits": { 00:31:21.828 "rw_ios_per_sec": 0, 00:31:21.828 "rw_mbytes_per_sec": 0, 00:31:21.828 
"r_mbytes_per_sec": 0, 00:31:21.828 "w_mbytes_per_sec": 0 00:31:21.828 }, 00:31:21.828 "claimed": false, 00:31:21.828 "zoned": false, 00:31:21.828 "supported_io_types": { 00:31:21.828 "read": true, 00:31:21.828 "write": true, 00:31:21.828 "unmap": true, 00:31:21.828 "flush": false, 00:31:21.828 "reset": true, 00:31:21.828 "nvme_admin": false, 00:31:21.828 "nvme_io": false, 00:31:21.828 "nvme_io_md": false, 00:31:21.828 "write_zeroes": true, 00:31:21.828 "zcopy": false, 00:31:21.828 "get_zone_info": false, 00:31:21.828 "zone_management": false, 00:31:21.828 "zone_append": false, 00:31:21.828 "compare": false, 00:31:21.828 "compare_and_write": false, 00:31:21.828 "abort": false, 00:31:21.828 "seek_hole": true, 00:31:21.828 "seek_data": true, 00:31:21.828 "copy": false, 00:31:21.828 "nvme_iov_md": false 00:31:21.828 }, 00:31:21.828 "driver_specific": { 00:31:21.828 "lvol": { 00:31:21.828 "lvol_store_uuid": "235c6294-0c57-467a-bfc8-1fe8f2fb7919", 00:31:21.828 "base_bdev": "aio_bdev", 00:31:21.828 "thin_provision": false, 00:31:21.828 "num_allocated_clusters": 38, 00:31:21.828 "snapshot": false, 00:31:21.828 "clone": false, 00:31:21.828 "esnap_clone": false 00:31:21.828 } 00:31:21.828 } 00:31:21.828 } 00:31:21.828 ] 00:31:21.828 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:31:21.828 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:21.828 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:22.086 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:22.086 09:07:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:22.086 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:22.344 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:22.344 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 460ff833-d60b-4071-b74b-d2c0e43142bc 00:31:22.603 09:07:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 235c6294-0c57-467a-bfc8-1fe8f2fb7919 00:31:23.169 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:23.169 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:23.426 00:31:23.426 real 0m19.681s 00:31:23.426 user 0m35.699s 00:31:23.426 sys 0m5.205s 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:23.426 ************************************ 00:31:23.426 END TEST lvs_grow_dirty 00:31:23.426 ************************************ 
00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:23.426 nvmf_trace.0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:23.426 09:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:23.426 rmmod nvme_tcp 00:31:23.426 rmmod nvme_fabrics 00:31:23.426 rmmod nvme_keyring 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 965508 ']' 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 965508 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 965508 ']' 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 965508 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 965508 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:23.426 09:07:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 965508' 00:31:23.426 killing process with pid 965508 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 965508 00:31:23.426 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 965508 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.685 09:07:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.222 09:07:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:26.222 00:31:26.222 real 0m43.123s 00:31:26.222 user 0m55.074s 00:31:26.222 sys 0m9.023s 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:26.222 ************************************ 00:31:26.222 END TEST nvmf_lvs_grow 00:31:26.222 ************************************ 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:26.222 ************************************ 00:31:26.222 START TEST nvmf_bdev_io_wait 00:31:26.222 ************************************ 00:31:26.222 09:07:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:26.222 * Looking for test storage... 
00:31:26.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lcov --version 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:26.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.222 --rc genhtml_branch_coverage=1 00:31:26.222 --rc genhtml_function_coverage=1 00:31:26.222 --rc genhtml_legend=1 00:31:26.222 --rc geninfo_all_blocks=1 00:31:26.222 --rc geninfo_unexecuted_blocks=1 00:31:26.222 00:31:26.222 ' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:26.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.222 --rc genhtml_branch_coverage=1 00:31:26.222 --rc genhtml_function_coverage=1 00:31:26.222 --rc genhtml_legend=1 00:31:26.222 --rc geninfo_all_blocks=1 00:31:26.222 --rc geninfo_unexecuted_blocks=1 00:31:26.222 00:31:26.222 ' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:26.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.222 --rc genhtml_branch_coverage=1 00:31:26.222 --rc genhtml_function_coverage=1 00:31:26.222 --rc genhtml_legend=1 00:31:26.222 --rc geninfo_all_blocks=1 00:31:26.222 --rc geninfo_unexecuted_blocks=1 00:31:26.222 00:31:26.222 ' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:26.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:26.222 --rc genhtml_branch_coverage=1 00:31:26.222 --rc genhtml_function_coverage=1 
00:31:26.222 --rc genhtml_legend=1 00:31:26.222 --rc geninfo_all_blocks=1 00:31:26.222 --rc geninfo_unexecuted_blocks=1 00:31:26.222 00:31:26.222 ' 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:26.222 09:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:26.222 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.223 09:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:26.223 09:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:26.223 09:07:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:26.223 09:07:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:28.125 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:28.125 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.125 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:28.126 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:28.126 Found net devices under 0000:09:00.0: cvl_0_0 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:28.126 Found net devices under 0000:09:00.1: cvl_0_1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:31:28.126 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:28.126 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:28.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:28.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:31:28.385 00:31:28.385 --- 10.0.0.2 ping statistics --- 00:31:28.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.385 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:28.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:28.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:31:28.385 00:31:28.385 --- 10.0.0.1 ping statistics --- 00:31:28.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:28.385 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:28.385 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=968148 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 968148 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 968148 ']' 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:28.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:28.385 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.385 [2024-11-06 09:07:41.547044] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:28.385 [2024-11-06 09:07:41.548174] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:28.385 [2024-11-06 09:07:41.548242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:28.385 [2024-11-06 09:07:41.617180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:28.643 [2024-11-06 09:07:41.677706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:28.643 [2024-11-06 09:07:41.677767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:28.643 [2024-11-06 09:07:41.677781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:28.643 [2024-11-06 09:07:41.677792] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:28.643 [2024-11-06 09:07:41.677801] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:28.643 [2024-11-06 09:07:41.679397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.643 [2024-11-06 09:07:41.679463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:28.643 [2024-11-06 09:07:41.679528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:28.643 [2024-11-06 09:07:41.679531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:28.643 [2024-11-06 09:07:41.679952] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 [2024-11-06 09:07:41.860940] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:28.643 [2024-11-06 09:07:41.861152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:28.643 [2024-11-06 09:07:41.861989] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:28.643 [2024-11-06 09:07:41.862710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 [2024-11-06 09:07:41.868153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 Malloc0 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:28.643 [2024-11-06 09:07:41.920326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=968180 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=968182 00:31:28.643 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:28.643 { 00:31:28.643 "params": { 00:31:28.643 "name": "Nvme$subsystem", 00:31:28.643 "trtype": "$TEST_TRANSPORT", 00:31:28.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.643 "adrfam": "ipv4", 00:31:28.643 "trsvcid": "$NVMF_PORT", 00:31:28.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.643 "hdgst": ${hdgst:-false}, 00:31:28.643 "ddgst": ${ddgst:-false} 00:31:28.643 }, 00:31:28.643 "method": "bdev_nvme_attach_controller" 00:31:28.643 } 00:31:28.643 EOF 00:31:28.643 )") 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=968184 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:28.643 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:28.643 { 00:31:28.643 "params": { 00:31:28.643 "name": "Nvme$subsystem", 00:31:28.643 "trtype": "$TEST_TRANSPORT", 00:31:28.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.643 "adrfam": "ipv4", 00:31:28.643 "trsvcid": "$NVMF_PORT", 00:31:28.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.643 "hdgst": ${hdgst:-false}, 00:31:28.643 "ddgst": ${ddgst:-false} 00:31:28.643 }, 00:31:28.643 "method": "bdev_nvme_attach_controller" 00:31:28.643 } 00:31:28.643 EOF 00:31:28.643 )") 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=968187 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:28.643 09:07:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:28.643 { 00:31:28.643 "params": { 00:31:28.643 "name": "Nvme$subsystem", 00:31:28.643 "trtype": "$TEST_TRANSPORT", 00:31:28.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.643 "adrfam": "ipv4", 00:31:28.643 "trsvcid": "$NVMF_PORT", 00:31:28.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.643 "hdgst": ${hdgst:-false}, 00:31:28.643 "ddgst": ${ddgst:-false} 00:31:28.643 }, 00:31:28.643 "method": "bdev_nvme_attach_controller" 00:31:28.643 } 00:31:28.643 EOF 00:31:28.643 )") 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:28.643 { 00:31:28.643 "params": { 00:31:28.643 "name": "Nvme$subsystem", 00:31:28.643 "trtype": "$TEST_TRANSPORT", 00:31:28.643 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.643 "adrfam": "ipv4", 00:31:28.643 "trsvcid": "$NVMF_PORT", 00:31:28.643 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.643 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.643 "hdgst": ${hdgst:-false}, 00:31:28.643 "ddgst": ${ddgst:-false} 
00:31:28.643 }, 00:31:28.643 "method": "bdev_nvme_attach_controller" 00:31:28.643 } 00:31:28.643 EOF 00:31:28.643 )") 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 968180 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:31:28.643 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:28.643 "params": { 00:31:28.643 "name": "Nvme1", 00:31:28.643 "trtype": "tcp", 00:31:28.643 "traddr": "10.0.0.2", 00:31:28.643 "adrfam": "ipv4", 00:31:28.643 "trsvcid": "4420", 00:31:28.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.643 "hdgst": false, 00:31:28.643 "ddgst": false 00:31:28.643 }, 00:31:28.643 "method": "bdev_nvme_attach_controller" 00:31:28.643 }' 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:28.901 "params": { 00:31:28.901 "name": "Nvme1", 00:31:28.901 "trtype": "tcp", 00:31:28.901 "traddr": "10.0.0.2", 00:31:28.901 "adrfam": "ipv4", 00:31:28.901 "trsvcid": "4420", 00:31:28.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.901 "hdgst": false, 00:31:28.901 "ddgst": false 00:31:28.901 }, 00:31:28.901 "method": "bdev_nvme_attach_controller" 00:31:28.901 }' 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:28.901 "params": { 00:31:28.901 "name": "Nvme1", 00:31:28.901 "trtype": "tcp", 00:31:28.901 "traddr": "10.0.0.2", 00:31:28.901 "adrfam": "ipv4", 00:31:28.901 "trsvcid": "4420", 00:31:28.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.901 "hdgst": false, 00:31:28.901 "ddgst": false 00:31:28.901 }, 00:31:28.901 "method": "bdev_nvme_attach_controller" 00:31:28.901 }' 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:31:28.901 09:07:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:28.901 "params": { 00:31:28.901 "name": "Nvme1", 00:31:28.901 "trtype": "tcp", 00:31:28.901 "traddr": "10.0.0.2", 00:31:28.901 "adrfam": "ipv4", 00:31:28.901 "trsvcid": "4420", 00:31:28.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:28.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:28.901 "hdgst": false, 00:31:28.901 "ddgst": false 00:31:28.901 }, 00:31:28.901 "method": "bdev_nvme_attach_controller" 
00:31:28.901 }' 00:31:28.901 [2024-11-06 09:07:41.970187] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:28.901 [2024-11-06 09:07:41.970187] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:28.901 [2024-11-06 09:07:41.970187] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:28.902 [2024-11-06 09:07:41.970272] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-06 09:07:41.970273] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-06 09:07:41.970273] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:28.902 --proc-type=auto ] 00:31:28.902 --proc-type=auto ] 00:31:28.902 [2024-11-06 09:07:41.970881] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:31:28.902 [2024-11-06 09:07:41.970952] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:28.902 [2024-11-06 09:07:42.153626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.159 [2024-11-06 09:07:42.207389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:29.159 [2024-11-06 09:07:42.249954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.159 [2024-11-06 09:07:42.304792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:29.159 [2024-11-06 09:07:42.352577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.159 [2024-11-06 09:07:42.410134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:29.159 [2024-11-06 09:07:42.428731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.417 [2024-11-06 09:07:42.479796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:29.417 Running I/O for 1 seconds... 00:31:29.417 Running I/O for 1 seconds... 00:31:29.674 Running I/O for 1 seconds... 00:31:29.674 Running I/O for 1 seconds... 
00:31:30.606 198936.00 IOPS, 777.09 MiB/s 00:31:30.606 Latency(us) 00:31:30.606 [2024-11-06T08:07:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.606 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:30.606 Nvme1n1 : 1.00 198560.38 775.63 0.00 0.00 641.19 298.86 1856.85 00:31:30.606 [2024-11-06T08:07:43.895Z] =================================================================================================================== 00:31:30.606 [2024-11-06T08:07:43.895Z] Total : 198560.38 775.63 0.00 0.00 641.19 298.86 1856.85 00:31:30.606 6420.00 IOPS, 25.08 MiB/s 00:31:30.606 Latency(us) 00:31:30.606 [2024-11-06T08:07:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.606 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:30.606 Nvme1n1 : 1.02 6422.68 25.09 0.00 0.00 19763.99 3932.16 29515.47 00:31:30.606 [2024-11-06T08:07:43.895Z] =================================================================================================================== 00:31:30.606 [2024-11-06T08:07:43.895Z] Total : 6422.68 25.09 0.00 0.00 19763.99 3932.16 29515.47 00:31:30.606 7953.00 IOPS, 31.07 MiB/s 00:31:30.606 Latency(us) 00:31:30.606 [2024-11-06T08:07:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.606 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:30.606 Nvme1n1 : 1.01 8010.70 31.29 0.00 0.00 15896.04 5995.33 21359.88 00:31:30.606 [2024-11-06T08:07:43.895Z] =================================================================================================================== 00:31:30.606 [2024-11-06T08:07:43.895Z] Total : 8010.70 31.29 0.00 0.00 15896.04 5995.33 21359.88 00:31:30.606 6364.00 IOPS, 24.86 MiB/s 00:31:30.606 Latency(us) 00:31:30.606 [2024-11-06T08:07:43.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:30.606 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:31:30.606 Nvme1n1 : 1.01 6473.40 25.29 0.00 0.00 19718.61 3543.80 38641.97 00:31:30.606 [2024-11-06T08:07:43.895Z] =================================================================================================================== 00:31:30.606 [2024-11-06T08:07:43.895Z] Total : 6473.40 25.29 0.00 0.00 19718.61 3543.80 38641.97 00:31:30.863 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 968182 00:31:30.863 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 968184 00:31:30.863 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 968187 00:31:30.863 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.864 09:07:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.864 09:07:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.864 rmmod nvme_tcp 00:31:30.864 rmmod nvme_fabrics 00:31:30.864 rmmod nvme_keyring 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 968148 ']' 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 968148 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 968148 ']' 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 968148 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 968148 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 968148' 00:31:30.864 killing process with pid 968148 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 968148 00:31:30.864 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 968148 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.123 09:07:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.123 09:07:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.025 00:31:33.025 real 0m7.319s 00:31:33.025 user 0m14.466s 00:31:33.025 sys 0m4.100s 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:33.025 ************************************ 00:31:33.025 END TEST nvmf_bdev_io_wait 00:31:33.025 ************************************ 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:33.025 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.284 ************************************ 00:31:33.284 START TEST nvmf_queue_depth 00:31:33.284 ************************************ 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:33.284 * Looking for test storage... 
00:31:33.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lcov --version 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:33.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.284 --rc genhtml_branch_coverage=1 00:31:33.284 --rc genhtml_function_coverage=1 00:31:33.284 --rc genhtml_legend=1 00:31:33.284 --rc geninfo_all_blocks=1 00:31:33.284 --rc geninfo_unexecuted_blocks=1 00:31:33.284 00:31:33.284 ' 00:31:33.284 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:33.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.284 --rc genhtml_branch_coverage=1 00:31:33.284 --rc genhtml_function_coverage=1 00:31:33.285 --rc genhtml_legend=1 00:31:33.285 --rc geninfo_all_blocks=1 00:31:33.285 --rc geninfo_unexecuted_blocks=1 00:31:33.285 00:31:33.285 ' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:33.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.285 --rc genhtml_branch_coverage=1 00:31:33.285 --rc genhtml_function_coverage=1 00:31:33.285 --rc genhtml_legend=1 00:31:33.285 --rc geninfo_all_blocks=1 00:31:33.285 --rc geninfo_unexecuted_blocks=1 00:31:33.285 00:31:33.285 ' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:33.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.285 --rc genhtml_branch_coverage=1 00:31:33.285 --rc genhtml_function_coverage=1 00:31:33.285 --rc genhtml_legend=1 00:31:33.285 --rc 
geninfo_all_blocks=1 00:31:33.285 --rc geninfo_unexecuted_blocks=1 00:31:33.285 00:31:33.285 ' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.285 09:07:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.285 09:07:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:33.285 09:07:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.285 09:07:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.189 
09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:35.189 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.189 09:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:35.189 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 
)) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:35.189 Found net devices under 0000:09:00.0: cvl_0_0 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:35.189 Found net devices under 0000:09:00.1: cvl_0_1 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:35.189 09:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.189 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.190 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:31:35.448 00:31:35.448 --- 10.0.0.2 ping statistics --- 00:31:35.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.448 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:31:35.448 00:31:35.448 --- 10.0.0.1 ping statistics --- 00:31:35.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.448 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:35.448 09:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=970391 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 970391 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 970391 ']' 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:35.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.448 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.448 [2024-11-06 09:07:48.610686] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:35.448 [2024-11-06 09:07:48.611719] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:35.448 [2024-11-06 09:07:48.611774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:35.448 [2024-11-06 09:07:48.688240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.707 [2024-11-06 09:07:48.743308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:35.707 [2024-11-06 09:07:48.743372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:35.707 [2024-11-06 09:07:48.743385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:35.707 [2024-11-06 09:07:48.743396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:35.707 [2024-11-06 09:07:48.743405] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:35.707 [2024-11-06 09:07:48.744027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.707 [2024-11-06 09:07:48.829487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:35.707 [2024-11-06 09:07:48.829806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 [2024-11-06 09:07:48.880600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 Malloc0 00:31:35.707 09:07:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 [2024-11-06 09:07:48.940700] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:35.707 
09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=970422 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 970422 /var/tmp/bdevperf.sock 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 970422 ']' 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:35.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.707 09:07:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:35.707 [2024-11-06 09:07:48.986604] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:31:35.707 [2024-11-06 09:07:48.986679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid970422 ] 00:31:35.965 [2024-11-06 09:07:49.052192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.965 [2024-11-06 09:07:49.108794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.965 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.965 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:35.965 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:35.965 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:35.965 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:36.223 NVMe0n1 00:31:36.223 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:36.223 09:07:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:36.486 Running I/O for 10 seconds... 
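The trace up to this point drives `target/queue_depth.sh` end to end: create a TCP transport, back it with a 64 MiB Malloc bdev, expose it through subsystem `nqn.2016-06.io.spdk:cnode1` on 10.0.0.2:4420, then attach bdevperf at queue depth 1024. A minimal manual equivalent, reconstructed from the RPC calls visible in the trace (paths assume a standard SPDK checkout with `nvmf_tgt` already listening on `/var/tmp/spdk.sock`; this is a procedure sketch, not a runnable test, since it needs live SPDK daemons and the cvl_0_0/cvl_0_1 network setup shown earlier):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence traced above. All flags are taken from the log;
# the relative paths (scripts/rpc.py, build/examples/bdevperf) are assumptions
# about a stock SPDK tree.
set -e
RPC=./scripts/rpc.py

# Transport and backing device (64 MiB Malloc bdev, 512 B blocks)
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0

# Subsystem, namespace, and TCP listener on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Drive I/O: bdevperf at queue depth 1024, 4 KiB verify workload, 10 s
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# Attach the target as an NVMe-oF controller, then kick off the run
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

The `-q 1024` depth is the point of the test: it exceeds the target's default per-queue limit, so completions below verify that the initiator-side queueing absorbs the excess rather than failing I/O.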
00:31:38.352 8028.00 IOPS, 31.36 MiB/s [2024-11-06T08:07:53.015Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-06T08:07:53.949Z] 8190.67 IOPS, 31.99 MiB/s [2024-11-06T08:07:54.882Z] 8194.00 IOPS, 32.01 MiB/s [2024-11-06T08:07:55.815Z] 8193.60 IOPS, 32.01 MiB/s [2024-11-06T08:07:56.748Z] 8218.17 IOPS, 32.10 MiB/s [2024-11-06T08:07:57.682Z] 8297.29 IOPS, 32.41 MiB/s [2024-11-06T08:07:58.615Z] 8320.88 IOPS, 32.50 MiB/s [2024-11-06T08:07:59.988Z] 8319.11 IOPS, 32.50 MiB/s [2024-11-06T08:07:59.988Z] 8360.30 IOPS, 32.66 MiB/s 00:31:46.699 Latency(us) 00:31:46.699 [2024-11-06T08:07:59.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.699 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:46.699 Verification LBA range: start 0x0 length 0x4000 00:31:46.699 NVMe0n1 : 10.13 8347.30 32.61 0.00 0.00 121581.88 21262.79 75342.13 00:31:46.699 [2024-11-06T08:07:59.988Z] =================================================================================================================== 00:31:46.699 [2024-11-06T08:07:59.988Z] Total : 8347.30 32.61 0.00 0.00 121581.88 21262.79 75342.13 00:31:46.699 { 00:31:46.699 "results": [ 00:31:46.699 { 00:31:46.699 "job": "NVMe0n1", 00:31:46.699 "core_mask": "0x1", 00:31:46.699 "workload": "verify", 00:31:46.699 "status": "finished", 00:31:46.699 "verify_range": { 00:31:46.699 "start": 0, 00:31:46.699 "length": 16384 00:31:46.699 }, 00:31:46.699 "queue_depth": 1024, 00:31:46.699 "io_size": 4096, 00:31:46.699 "runtime": 10.13441, 00:31:46.699 "iops": 8347.303888435537, 00:31:46.699 "mibps": 32.60665581420132, 00:31:46.699 "io_failed": 0, 00:31:46.699 "io_timeout": 0, 00:31:46.699 "avg_latency_us": 121581.88427429166, 00:31:46.700 "min_latency_us": 21262.79111111111, 00:31:46.700 "max_latency_us": 75342.1274074074 00:31:46.700 } 00:31:46.700 ], 00:31:46.700 "core_count": 1 00:31:46.700 } 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 970422 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 970422 ']' 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 970422 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970422 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970422' 00:31:46.700 killing process with pid 970422 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 970422 00:31:46.700 Received shutdown signal, test time was about 10.000000 seconds 00:31:46.700 00:31:46.700 Latency(us) 00:31:46.700 [2024-11-06T08:07:59.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.700 [2024-11-06T08:07:59.989Z] =================================================================================================================== 00:31:46.700 [2024-11-06T08:07:59.989Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:46.700 09:07:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 970422 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.957 rmmod nvme_tcp 00:31:46.957 rmmod nvme_fabrics 00:31:46.957 rmmod nvme_keyring 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 970391 ']' 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 970391 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 970391 ']' 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 970391 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@955 -- # uname 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 970391 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 970391' 00:31:46.957 killing process with pid 970391 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 970391 00:31:46.957 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 970391 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:31:47.215 09:08:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.215 09:08:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:49.748 00:31:49.748 real 0m16.099s 00:31:49.748 user 0m21.397s 00:31:49.748 sys 0m3.814s 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:49.748 ************************************ 00:31:49.748 END TEST nvmf_queue_depth 00:31:49.748 ************************************ 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.748 ************************************ 00:31:49.748 START TEST 
nvmf_target_multipath 00:31:49.748 ************************************ 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:49.748 * Looking for test storage... 00:31:49.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.748 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lcov --version 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.749 09:08:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.749 --rc genhtml_branch_coverage=1 00:31:49.749 --rc genhtml_function_coverage=1 00:31:49.749 --rc genhtml_legend=1 00:31:49.749 --rc geninfo_all_blocks=1 00:31:49.749 --rc geninfo_unexecuted_blocks=1 00:31:49.749 00:31:49.749 ' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.749 --rc genhtml_branch_coverage=1 00:31:49.749 --rc genhtml_function_coverage=1 00:31:49.749 --rc genhtml_legend=1 00:31:49.749 --rc geninfo_all_blocks=1 00:31:49.749 --rc geninfo_unexecuted_blocks=1 00:31:49.749 00:31:49.749 ' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.749 --rc genhtml_branch_coverage=1 00:31:49.749 --rc genhtml_function_coverage=1 00:31:49.749 --rc genhtml_legend=1 00:31:49.749 --rc geninfo_all_blocks=1 00:31:49.749 --rc geninfo_unexecuted_blocks=1 00:31:49.749 00:31:49.749 ' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:49.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.749 --rc genhtml_branch_coverage=1 00:31:49.749 --rc genhtml_function_coverage=1 00:31:49.749 --rc genhtml_legend=1 00:31:49.749 --rc geninfo_all_blocks=1 00:31:49.749 --rc geninfo_unexecuted_blocks=1 00:31:49.749 00:31:49.749 ' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.749 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.750 09:08:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.750 09:08:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:49.750 09:08:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.654 09:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:51.654 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:51.654 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:51.654 Found net devices under 0000:09:00.0: cvl_0_0 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.654 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.655 09:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:51.655 Found net devices under 0000:09:00.1: cvl_0_1 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.655 09:08:04 
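The discovery loop traced above maps each matching PCI function to its kernel net device by globbing sysfs, then strips the directory prefix to keep only the interface name. A minimal standalone sketch of that lookup (the function name and the optional sysfs-root parameter are ours, added so the logic can be exercised without the E810 hardware from this run):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs-based PCI -> netdev mapping seen in nvmf/common.sh@409/425.
# The kernel exposes net devices bound to a PCI function under
# /sys/bus/pci/devices/<addr>/net/<ifname>.
list_pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    # Glob the net/ subdirectory, as pci_net_devs=("/sys/.../$pci/net/"*) does.
    local devs=("$root/$pci/net/"*)
    # Keep only the interface names, mirroring ("${pci_net_devs[@]##*/}").
    printf '%s\n' "${devs[@]##*/}"
}
# Example from this log's hardware (only meaningful on a host with that NIC):
# list_pci_net_devs 0000:09:00.0
```

If the path does not exist the unexpanded glob (`*`) comes back literally; the real script guards for that with its `(( ${#pci_net_devs[@]} == 0 ))` check.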
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.655 09:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:31:51.655 00:31:51.655 --- 10.0.0.2 ping statistics --- 00:31:51.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.655 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:31:51.655 00:31:51.655 --- 10.0.0.1 ping statistics --- 00:31:51.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.655 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:51.655 only one NIC for nvmf test 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:51.655 09:08:04 
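The `nvmf_tcp_init` sequence traced above moves the target NIC into a dedicated network namespace so target and initiator addresses (10.0.0.2 / 10.0.0.1) are genuinely separated, then brings both links up and verifies reachability with `ping`. A dry-run-capable sketch of those steps (names and addresses taken from the log; the `run` wrapper and function name are ours, and executing for real requires root):

```shell
#!/usr/bin/env bash
# With DRY_RUN set, commands are printed instead of executed (no root needed).
run() { if [ -n "${DRY_RUN:-}" ]; then echo "$@"; else "$@"; fi; }

nvmf_tcp_init_sketch() {
    local ns=cvl_0_0_ns_spdk       # namespace name from the log
    local tgt=cvl_0_0 ini=cvl_0_1  # target / initiator interfaces
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"                  # isolate target NIC
    run ip addr add 10.0.0.1/24 dev "$ini"              # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
}
# Preview the command sequence without root:
# DRY_RUN=1 nvmf_tcp_init_sketch
```

Teardown in the trace is the mirror image: `ip netns del` (via `_remove_spdk_ns`) returns the NIC to the root namespace, and `ip -4 addr flush` clears the initiator address.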
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.655 rmmod nvme_tcp 00:31:51.655 rmmod nvme_fabrics 00:31:51.655 rmmod nvme_keyring 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:51.655 09:08:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.655 09:08:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.247 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.247 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:54.247 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:54.247 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.248 
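The `ipts`/`iptr` helpers traced above implement a tag-and-sweep firewall pattern: every rule is inserted with an `SPDK_NVMF` comment (`-m comment --comment 'SPDK_NVMF:...'`), so teardown can remove all of them at once by filtering the saved ruleset rather than tracking individual rules. The filtering half is pure text processing and can be shown without root (the sample ruleset below is illustrative):

```shell
#!/usr/bin/env bash
# Tag-and-sweep cleanup, as in nvmf/common.sh@789:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here the same filter is applied to a saved-ruleset string instead.
ruleset='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:tagged
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'
# Drop every tagged rule; untagged rules survive.
cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
```

The design choice: because `iptables-save` output is one rule per line, a line filter is sufficient, and rules added by anything other than the test harness are left untouched.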
09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:54.248 00:31:54.248 real 0m4.483s 00:31:54.248 user 0m0.899s 00:31:54.248 sys 0m1.591s 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:54.248 ************************************ 00:31:54.248 END TEST nvmf_target_multipath 00:31:54.248 ************************************ 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:54.248 09:08:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:54.248 ************************************ 00:31:54.248 START TEST nvmf_zcopy 00:31:54.248 ************************************ 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:54.248 * Looking for test storage... 
00:31:54.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lcov --version 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:54.248 09:08:07 
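The `cmp_versions` trace above (scripts/common.sh@333-368, invoked as `lt 1.15 2`) splits both version strings on `.`, `-`, or `:` with `IFS` and compares them component by component, padding the shorter one with zeros. A simplified self-contained sketch of that logic (the name `version_lt` is ours; non-numeric components, which the real script also handles via `decimal`, are not covered here):

```shell
#!/usr/bin/env bash
# Returns 0 (true) when $1 is strictly less than $2, component-wise.
version_lt() {
    local IFS=.-:          # split on '.', '-', or ':' like cmp_versions
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local i a b
    for (( i = 0; i < len; i++ )); do
        a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1               # equal versions are not less-than
}
# version_lt 1.15 2   -> exit 0, matching the 'lt 1.15 2' check in the log
```

This is why the trace shows `ver1_l=2` and `ver2_l=1` for `1.15` vs `2`: the loop runs over the longer length and decides at the first unequal component.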
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:31:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.248 --rc genhtml_branch_coverage=1 00:31:54.248 --rc genhtml_function_coverage=1 00:31:54.248 --rc genhtml_legend=1 00:31:54.248 --rc geninfo_all_blocks=1 00:31:54.248 --rc geninfo_unexecuted_blocks=1 00:31:54.248 00:31:54.248 ' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:31:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.248 --rc genhtml_branch_coverage=1 00:31:54.248 --rc genhtml_function_coverage=1 00:31:54.248 --rc genhtml_legend=1 00:31:54.248 --rc geninfo_all_blocks=1 00:31:54.248 --rc geninfo_unexecuted_blocks=1 00:31:54.248 00:31:54.248 ' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:31:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.248 --rc genhtml_branch_coverage=1 00:31:54.248 --rc genhtml_function_coverage=1 00:31:54.248 --rc genhtml_legend=1 00:31:54.248 --rc geninfo_all_blocks=1 00:31:54.248 --rc geninfo_unexecuted_blocks=1 00:31:54.248 00:31:54.248 ' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:31:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:54.248 --rc genhtml_branch_coverage=1 00:31:54.248 --rc genhtml_function_coverage=1 00:31:54.248 --rc genhtml_legend=1 00:31:54.248 --rc geninfo_all_blocks=1 00:31:54.248 --rc geninfo_unexecuted_blocks=1 00:31:54.248 00:31:54.248 ' 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:54.248 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:54.248 09:08:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:54.249 09:08:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:54.249 09:08:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.152 
09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.152 09:08:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:56.152 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:56.152 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:56.152 Found net devices under 0000:09:00.0: cvl_0_0 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:56.152 Found net devices under 0000:09:00.1: cvl_0_1 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:31:56.152 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.153 09:08:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:31:56.153 00:31:56.153 --- 10.0.0.2 ping statistics --- 00:31:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.153 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:31:56.153 00:31:56.153 --- 10.0.0.1 ping statistics --- 00:31:56.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.153 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # 
nvmfpid=975610 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 975610 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 975610 ']' 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.153 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.153 [2024-11-06 09:08:09.380036] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:56.153 [2024-11-06 09:08:09.381156] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:31:56.153 [2024-11-06 09:08:09.381223] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.411 [2024-11-06 09:08:09.452668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.412 [2024-11-06 09:08:09.508979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.412 [2024-11-06 09:08:09.509035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:56.412 [2024-11-06 09:08:09.509050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.412 [2024-11-06 09:08:09.509062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.412 [2024-11-06 09:08:09.509073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.412 [2024-11-06 09:08:09.509736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.412 [2024-11-06 09:08:09.607632] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:56.412 [2024-11-06 09:08:09.607953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
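The nvmf_tcp_init plumbing traced above boils down to a short sequence of iproute2 and iptables commands. The sketch below is reconstructed from the logged commands, not taken from the test scripts themselves; the interface names (cvl_0_0, cvl_0_1), namespace name (cvl_0_0_ns_spdk), addresses (10.0.0.1/10.0.0.2) and port 4420 are all copied from the log. Applying it for real needs root and the actual NICs, so unprivileged runs only print the commands:

```shell
# Sketch of the netns setup traced in the log above (names/addresses from the log).
# Run with APPLY=1 as root to actually execute; otherwise commands are echoed.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the target namespace, carries 10.0.0.2
INI_IF=cvl_0_1   # stays in the root namespace, carries 10.0.0.1
PLANNED=""
run() {
    PLANNED="$PLANNED $*;"
    if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi
}
run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic on the initiator-side interface (port 4420).
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this split, the target (nvmf_tgt, started via `ip netns exec cvl_0_0_ns_spdk` later in the log) and the initiator live behind separate network stacks on the same host, which is why both ping directions are verified before the target app is launched.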
00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.412 [2024-11-06 09:08:09.658366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.412 
09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.412 [2024-11-06 09:08:09.674499] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.412 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.670 malloc0 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:56.670 { 00:31:56.670 "params": { 00:31:56.670 "name": "Nvme$subsystem", 00:31:56.670 "trtype": "$TEST_TRANSPORT", 00:31:56.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.670 "adrfam": "ipv4", 00:31:56.670 "trsvcid": "$NVMF_PORT", 00:31:56.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.670 "hdgst": ${hdgst:-false}, 00:31:56.670 "ddgst": ${ddgst:-false} 00:31:56.670 }, 00:31:56.670 "method": "bdev_nvme_attach_controller" 00:31:56.670 } 00:31:56.670 EOF 00:31:56.670 )") 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:31:56.670 09:08:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:56.670 09:08:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:56.670 "params": { 00:31:56.670 "name": "Nvme1", 00:31:56.670 "trtype": "tcp", 00:31:56.670 "traddr": "10.0.0.2", 00:31:56.670 "adrfam": "ipv4", 00:31:56.670 "trsvcid": "4420", 00:31:56.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:56.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:56.670 "hdgst": false, 00:31:56.670 "ddgst": false 00:31:56.670 }, 00:31:56.670 "method": "bdev_nvme_attach_controller" 00:31:56.670 }' 00:31:56.670 [2024-11-06 09:08:09.766435] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:31:56.670 [2024-11-06 09:08:09.766530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid975636 ] 00:31:56.670 [2024-11-06 09:08:09.837715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.670 [2024-11-06 09:08:09.896940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.236 Running I/O for 10 seconds... 
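For readability, the controller config that `gen_nvmf_target_json` prints above (and that bdevperf reads back over `--json /dev/fd/62`) is, with every value taken verbatim from the logged `printf` output:

```
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
```

This is just the single attach-controller entry visible in the trace, reformatted; the trace shows it being assembled from the `config+=(...)` heredoc template and passed through `jq` before reaching bdevperf.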
00:31:59.102 5554.00 IOPS, 43.39 MiB/s [2024-11-06T08:08:13.325Z] 5577.50 IOPS, 43.57 MiB/s [2024-11-06T08:08:14.259Z] 5583.67 IOPS, 43.62 MiB/s [2024-11-06T08:08:15.629Z] 5574.50 IOPS, 43.55 MiB/s [2024-11-06T08:08:16.562Z] 5577.20 IOPS, 43.57 MiB/s [2024-11-06T08:08:17.495Z] 5582.33 IOPS, 43.61 MiB/s [2024-11-06T08:08:18.430Z] 5585.57 IOPS, 43.64 MiB/s [2024-11-06T08:08:19.363Z] 5588.12 IOPS, 43.66 MiB/s [2024-11-06T08:08:20.354Z] 5590.11 IOPS, 43.67 MiB/s [2024-11-06T08:08:20.354Z] 5591.30 IOPS, 43.68 MiB/s 00:32:07.065 Latency(us) 00:32:07.065 [2024-11-06T08:08:20.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.065 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:07.065 Verification LBA range: start 0x0 length 0x1000 00:32:07.065 Nvme1n1 : 10.02 5593.35 43.70 0.00 0.00 22821.34 3301.07 30292.20 00:32:07.065 [2024-11-06T08:08:20.354Z] =================================================================================================================== 00:32:07.065 [2024-11-06T08:08:20.354Z] Total : 5593.35 43.70 0.00 0.00 22821.34 3301.07 30292.20 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=976936 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:32:07.323 09:08:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:07.323 { 00:32:07.323 "params": { 00:32:07.323 "name": "Nvme$subsystem", 00:32:07.323 "trtype": "$TEST_TRANSPORT", 00:32:07.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:07.323 "adrfam": "ipv4", 00:32:07.323 "trsvcid": "$NVMF_PORT", 00:32:07.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:07.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:07.323 "hdgst": ${hdgst:-false}, 00:32:07.323 "ddgst": ${ddgst:-false} 00:32:07.323 }, 00:32:07.323 "method": "bdev_nvme_attach_controller" 00:32:07.323 } 00:32:07.323 EOF 00:32:07.323 )") 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:32:07.323 [2024-11-06 09:08:20.514328] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.514366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:32:07.323 09:08:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:07.323 "params": { 00:32:07.323 "name": "Nvme1", 00:32:07.323 "trtype": "tcp", 00:32:07.323 "traddr": "10.0.0.2", 00:32:07.323 "adrfam": "ipv4", 00:32:07.323 "trsvcid": "4420", 00:32:07.323 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:07.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:07.323 "hdgst": false, 00:32:07.323 "ddgst": false 00:32:07.323 }, 00:32:07.323 "method": "bdev_nvme_attach_controller" 00:32:07.323 }' 00:32:07.323 [2024-11-06 09:08:20.522266] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.522287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.530238] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.530259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.538263] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.538282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.546252] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.546272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.554250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.554270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.554384] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:32:07.323 [2024-11-06 09:08:20.554442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid976936 ] 00:32:07.323 [2024-11-06 09:08:20.562250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.562269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.570248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.570267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.578250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.578269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.586248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.586267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.594270] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.594289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.602233] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.602252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.323 [2024-11-06 09:08:20.610253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.323 [2024-11-06 09:08:20.610272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:07.582 [2024-11-06 09:08:20.618249] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.618267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.625393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:07.582 [2024-11-06 09:08:20.626248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.626267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.634273] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.634309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.642282] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.642312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.650254] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.650273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.658234] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.658252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.666249] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.666269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.674248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.674267] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.682249] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.682268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.689358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:07.582 [2024-11-06 09:08:20.690248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.690267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.698233] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.698252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.706275] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.706303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.714277] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.714307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.722264] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.722295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.730282] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.730314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.738263] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:32:07.582 [2024-11-06 09:08:20.738293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.746277] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.746307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.754277] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.754309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.762250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.762269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.770275] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.770304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.778265] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.778296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.786265] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.786296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.794250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.794269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.802234] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 
09:08:20.802253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.810240] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.810264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.818255] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.818277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.826238] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.826260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.834255] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.834277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.842253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.842275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.850257] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.850278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.858250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.858270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.582 [2024-11-06 09:08:20.866249] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.582 [2024-11-06 09:08:20.866268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.874264] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.874284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.882249] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.882268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.890253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.890275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.898250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.898271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.906234] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.906254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.914253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.914274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.922236] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.922256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.930254] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.930274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 
[2024-11-06 09:08:20.938247] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.938269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.946253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.946274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.954251] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.954271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.962251] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.962271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.970269] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.970290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:20.978253] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:20.978274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.025047] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.025076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.030275] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.030298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.038267] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.038290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 Running I/O for 5 seconds... 00:32:07.841 [2024-11-06 09:08:21.054024] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.054052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.065547] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.065588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.076966] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.841 [2024-11-06 09:08:21.076992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.841 [2024-11-06 09:08:21.092214] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.842 [2024-11-06 09:08:21.092242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.842 [2024-11-06 09:08:21.108317] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.842 [2024-11-06 09:08:21.108345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:07.842 [2024-11-06 09:08:21.123545] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:07.842 [2024-11-06 09:08:21.123573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.132996] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.133031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.145138] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.145165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.159298] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.159325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.169336] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.169369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.181751] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.181777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.193096] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.193136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.206141] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.206170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.215572] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.215597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.227129] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.227155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.238305] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.238331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.249345] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.249370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.262389] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.262416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.272703] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.272728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.284424] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.284449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.295442] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.295468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.306452] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.306478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.318007] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.318033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.329043] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 
[2024-11-06 09:08:21.329070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.342250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.342292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.351450] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.351478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.363639] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.363665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.374679] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.374705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.100 [2024-11-06 09:08:21.385919] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.100 [2024-11-06 09:08:21.385955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.398971] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.398998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.408487] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.408514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.420643] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.420670] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.431857] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.431884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.443058] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.443085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.454275] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.454302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.465518] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.465543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.478884] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.478911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.488885] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.488927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.501027] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.501055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.516434] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.516462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:08.358 [2024-11-06 09:08:21.534442] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.534469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.358 [2024-11-06 09:08:21.545864] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.358 [2024-11-06 09:08:21.545891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.560501] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.560536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.575408] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.575435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.585473] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.585500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.597735] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.597761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.608824] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.608861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.623971] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.624007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.633715] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.633741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.359 [2024-11-06 09:08:21.645311] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.359 [2024-11-06 09:08:21.645339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.658158] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.658186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.667576] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.667603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.680171] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.680211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.691247] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.691273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.702765] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.702789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.712789] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.712815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.725050] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.725077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.740008] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.740036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.750324] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.750350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.762200] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.762240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.773098] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.773138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.788885] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.788926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.802999] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.803026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.812385] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.812410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.827103] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 
[2024-11-06 09:08:21.827143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.838151] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.838194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.849756] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.849802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.862930] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.862956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.872642] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.872682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.884500] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.884525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.895420] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.895445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.617 [2024-11-06 09:08:21.906234] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.617 [2024-11-06 09:08:21.906261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:08.876 [2024-11-06 09:08:21.917354] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:08.876 [2024-11-06 09:08:21.917382] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:08.876 [2024-11-06 09:08:21.932611] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:08.876 [2024-11-06 09:08:21.932654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:08.876 [... same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated for every subsequent add-namespace attempt from 09:08:21.945 through 09:08:23.975 ...]
00:32:08.876 11313.00 IOPS, 88.38 MiB/s [2024-11-06T08:08:22.165Z]
00:32:09.911 11364.00 IOPS, 88.78 MiB/s [2024-11-06T08:08:23.200Z]
00:32:10.943 [2024-11-06 09:08:23.986325] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 
[2024-11-06 09:08:23.986350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:23.997510] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:23.997536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.009361] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.009386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.024646] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.024672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.038201] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.038227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.048045] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.048072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 11374.33 IOPS, 88.86 MiB/s [2024-11-06T08:08:24.232Z] [2024-11-06 09:08:24.063010] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.063036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.073729] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.073754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.084943] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 
[2024-11-06 09:08:24.084970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.096077] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.096104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.107198] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.107235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.118228] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.118254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.129079] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.129119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.143442] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.143468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.153208] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.153248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.165871] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.165903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.181512] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.181537] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.191540] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.191565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.207279] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.207305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.218000] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.218027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:10.943 [2024-11-06 09:08:24.228572] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:10.943 [2024-11-06 09:08:24.228612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.242519] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.242544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.251872] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.251913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.263909] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.263934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.274857] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.274884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:11.202 [2024-11-06 09:08:24.285994] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.286020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.300949] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.300975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.315540] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.315566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.325581] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.325605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.337890] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.337924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.348693] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.348719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.364072] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.364099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.374127] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.374152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.386271] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.386297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.397146] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.397172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.411892] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.411920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.420801] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.420853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.433031] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.433058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.448772] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.448797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.461718] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.461744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.471727] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.471753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.202 [2024-11-06 09:08:24.484351] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:11.202 [2024-11-06 09:08:24.484379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.495856] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.495883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.507185] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.507210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.518230] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.518257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.529284] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.529309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.542668] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.542708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.552695] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.552719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.565094] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.565142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.578503] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 
[2024-11-06 09:08:24.578529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.460 [2024-11-06 09:08:24.588714] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.460 [2024-11-06 09:08:24.588754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.600881] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.600908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.616118] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.616163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.625662] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.625687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.637538] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.637564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.650671] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.650696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.660309] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.660336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.675531] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.675557] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.686282] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.686308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.697716] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.697742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.709288] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.709313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.723165] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.723191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.733069] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.733096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.461 [2024-11-06 09:08:24.745440] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.461 [2024-11-06 09:08:24.745465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.760158] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.760184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.769992] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.770019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:11.719 [2024-11-06 09:08:24.782403] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.782428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.793425] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.793456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.808204] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.808229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.823206] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.823232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.832945] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.832972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.845178] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.845203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.861579] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.861603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.873909] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.873935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.883663] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.883688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.895849] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.895875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.910917] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.910944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.921002] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.921029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.933758] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.933784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.947602] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.947629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.957156] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.957193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.969650] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.969675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.983361] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.983402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:24.993650] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:24.993674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.719 [2024-11-06 09:08:25.005747] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.719 [2024-11-06 09:08:25.005772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.017101] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.017143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.031914] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.031941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.041749] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.041774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 11364.75 IOPS, 88.79 MiB/s [2024-11-06T08:08:25.266Z] [2024-11-06 09:08:25.053856] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.053884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.064728] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.064754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.079653] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.079679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.089273] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.089298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.102163] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.102203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.113272] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.113296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.126883] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.126909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.977 [2024-11-06 09:08:25.136293] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.977 [2024-11-06 09:08:25.136319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.151168] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.151195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.162394] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.162417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.173254] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 
[2024-11-06 09:08:25.173280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.186495] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.186521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.196181] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.196206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.211288] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.211313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.222072] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.222098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.233260] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.233286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.246570] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.246595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:11.978 [2024-11-06 09:08:25.256049] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:11.978 [2024-11-06 09:08:25.256076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.268271] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.268297] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.279417] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.279443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.290687] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.290712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.302073] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.302099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.313338] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.313363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.326340] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.326380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.336076] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.336101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.351825] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.351861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.363008] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.363049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:12.236 [2024-11-06 09:08:25.374056] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.374082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.385035] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.385062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.400152] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.400178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.410100] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.410140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.422202] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.422226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.432723] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.432749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.447224] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.447250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.457256] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.457281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.469172] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.469204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.482090] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.482117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.491581] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.491608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.504178] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.504203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.236 [2024-11-06 09:08:25.515469] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.236 [2024-11-06 09:08:25.515493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.494 [2024-11-06 09:08:25.526590] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.526616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.538138] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.538181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.549499] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.549524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.562707] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.562733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.571990] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.572016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.587020] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.587047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.598754] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.598778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.609369] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.609394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.621414] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.621438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.632847] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.632871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.648162] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.648188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.663042] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 
[2024-11-06 09:08:25.663069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.672651] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.672675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.684602] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.684643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.701046] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.701080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.716144] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.716170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.725653] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.725678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.738209] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.738234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.751627] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.751653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.761138] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.761163] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.495 [2024-11-06 09:08:25.773255] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.495 [2024-11-06 09:08:25.773281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.786315] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.786342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.796218] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.796242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.809001] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.809028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.820606] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.820631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.835638] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.835664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.845569] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.845595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.857621] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.857646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:12.753 [2024-11-06 09:08:25.871757] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.871782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.881429] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.881454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.893468] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.893494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.906586] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.906613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.916266] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.916291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.930880] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.930914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.941126] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.941150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.952481] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.952506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.966711] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.966751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.976589] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.976614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:25.988897] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:25.988923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:26.004285] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:26.004311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:26.019109] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:26.019136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:26.028727] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:26.028753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.753 [2024-11-06 09:08:26.041321] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.753 [2024-11-06 09:08:26.041348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 11370.80 IOPS, 88.83 MiB/s [2024-11-06T08:08:26.302Z] [2024-11-06 09:08:26.055748] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.055773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.062258] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.062282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.013
00:32:13.013 Latency(us)
00:32:13.013 [2024-11-06T08:08:26.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:13.013 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:13.013 Nvme1n1 : 5.01 11373.45 88.86 0.00 0.00 11240.08 3046.21 19418.07
00:32:13.013 [2024-11-06T08:08:26.302Z] ===================================================================================================================
00:32:13.013 [2024-11-06T08:08:26.302Z] Total : 11373.45 88.86 0.00 0.00 11240.08 3046.21 19418.07
00:32:13.013 [2024-11-06 09:08:26.070254] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.070278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.013 [2024-11-06 09:08:26.078252] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.078275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.013 [2024-11-06 09:08:26.086258] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.086286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.013 [2024-11-06 09:08:26.094292] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.094334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.013 [2024-11-06 09:08:26.102290] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.013 [2024-11-06 09:08:26.102331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:32:13.013 [2024-11-06 09:08:26.110292] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.110331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.118308] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.118350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.126280] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.126322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.134294] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.134336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.142291] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.142330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.150291] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.150333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.162311] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.162361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.170291] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.170333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.178292] 
subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.178335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.186294] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.186335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.194302] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.194342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.202289] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.202321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.210250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.210269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.218250] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.218270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.226234] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.226253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.234240] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.234261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.242291] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.242329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.250290] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.250330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.258278] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.258313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.266251] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.266270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.274248] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.274267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 [2024-11-06 09:08:26.282247] subsystem.c:2127:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.013 [2024-11-06 09:08:26.282266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (976936) - No such process 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 976936 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.013 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:13.271 delay0 00:32:13.271 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.271 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:13.272 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:13.272 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:13.272 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:13.272 09:08:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:13.272 [2024-11-06 09:08:26.359518] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:21.378 Initializing NVMe Controllers 00:32:21.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:21.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:21.378 
Initialization complete. Launching workers. 00:32:21.378 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 238, failed: 18345 00:32:21.378 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18465, failed to submit 118 00:32:21.378 success 18372, unsuccessful 93, failed 0 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:21.378 rmmod nvme_tcp 00:32:21.378 rmmod nvme_fabrics 00:32:21.378 rmmod nvme_keyring 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 975610 ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 975610 00:32:21.378 09:08:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 975610 ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 975610 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 975610 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 975610' 00:32:21.378 killing process with pid 975610 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 975610 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 975610 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:21.378 09:08:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.757 00:32:22.757 real 0m28.817s 00:32:22.757 user 0m39.912s 00:32:22.757 sys 0m10.723s 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:22.757 ************************************ 00:32:22.757 END TEST nvmf_zcopy 00:32:22.757 ************************************ 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.757 ************************************ 00:32:22.757 START TEST nvmf_nmic 00:32:22.757 ************************************ 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:22.757 * Looking for test storage... 00:32:22.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lcov --version 00:32:22.757 09:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:32:22.757 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.758 
09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.758 09:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:32:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.758 --rc genhtml_branch_coverage=1 00:32:22.758 --rc genhtml_function_coverage=1 00:32:22.758 --rc genhtml_legend=1 00:32:22.758 --rc geninfo_all_blocks=1 00:32:22.758 --rc geninfo_unexecuted_blocks=1 00:32:22.758 00:32:22.758 ' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:32:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.758 --rc genhtml_branch_coverage=1 00:32:22.758 --rc genhtml_function_coverage=1 00:32:22.758 --rc genhtml_legend=1 00:32:22.758 --rc geninfo_all_blocks=1 00:32:22.758 --rc geninfo_unexecuted_blocks=1 00:32:22.758 00:32:22.758 ' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:32:22.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.758 --rc genhtml_branch_coverage=1 00:32:22.758 --rc genhtml_function_coverage=1 00:32:22.758 --rc genhtml_legend=1 00:32:22.758 --rc geninfo_all_blocks=1 00:32:22.758 --rc geninfo_unexecuted_blocks=1 00:32:22.758 00:32:22.758 ' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:32:22.758 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.758 --rc genhtml_branch_coverage=1 00:32:22.758 --rc genhtml_function_coverage=1 00:32:22.758 --rc genhtml_legend=1 00:32:22.758 --rc geninfo_all_blocks=1 00:32:22.758 --rc geninfo_unexecuted_blocks=1 00:32:22.758 00:32:22.758 ' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:22.758 09:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.758 09:08:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
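The PATH echoed above carries the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories many times over, because the sourced export script prepends them unconditionally on every invocation. An idempotent prepend avoids that accumulation; `path_prepend` is a hypothetical helper, not part of the SPDK scripts:

```shell
# Prepend a directory to PATH only if it is not already present.
# The ":$PATH:" wrapping makes the substring match anchor on full components.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already on PATH: do nothing
    *) PATH="$1:$PATH" ;;
  esac
}
```

Sourcing a script built on this pattern any number of times leaves PATH with one copy of each directory.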
00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:22.758 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.759 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.017 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:23.017 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:23.017 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.017 09:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:24.919 09:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:24.919 09:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:24.919 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:24.919 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:24.919 09:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:24.919 Found net devices under 0000:09:00.0: cvl_0_0 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:24.919 09:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:24.919 Found net devices under 0000:09:00.1: cvl_0_1 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:24.919 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:25.178 09:08:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:25.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:25.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:32:25.178 00:32:25.178 --- 10.0.0.2 ping statistics --- 00:32:25.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.178 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:25.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:25.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:32:25.178 00:32:25.178 --- 10.0.0.1 ping statistics --- 00:32:25.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:25.178 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=980323 
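The `nvmf_tcp_init` sequence above moves the target-side port (`cvl_0_0`) into its own network namespace so the NVMe/TCP target and initiator can talk over real interfaces on one host, then opens TCP port 4420 and verifies reachability with ping in both directions. A sketch of that setup, with interface names and addresses copied from the log; the `run`/`DRY_RUN` wrapper is our addition, since the real commands need root:

```shell
# Recreate the namespace split from the trace. DRY_RUN=1 prints the
# commands instead of executing them (executing requires root).
TARGET_IF=${TARGET_IF:-cvl_0_0}
INIT_IF=${INIT_IF:-cvl_0_1}
NS=${NS:-cvl_0_0_ns_spdk}

run() { if [[ ${DRY_RUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }

setup_netns() {
  run ip netns add "$NS"
  run ip link set "$TARGET_IF" netns "$NS"                          # hide target port in the ns
  run ip addr add 10.0.0.1/24 dev "$INIT_IF"                        # initiator side
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target side
  run ip link set "$INIT_IF" up
  run ip netns exec "$NS" ip link set "$TARGET_IF" up
  run ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port on the initiator-facing interface
  run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
}
```

Because the target interface now lives inside `cvl_0_0_ns_spdk`, every target-side command in the rest of the log (including `nvmf_tgt` itself) is prefixed with `ip netns exec cvl_0_0_ns_spdk`.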
00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 980323 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 980323 ']' 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:25.178 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:25.178 [2024-11-06 09:08:38.357744] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:25.178 [2024-11-06 09:08:38.358781] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:32:25.178 [2024-11-06 09:08:38.358852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:25.178 [2024-11-06 09:08:38.431631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:25.436 [2024-11-06 09:08:38.490351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:25.436 [2024-11-06 09:08:38.490415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:25.436 [2024-11-06 09:08:38.490428] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:25.436 [2024-11-06 09:08:38.490454] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:25.436 [2024-11-06 09:08:38.490464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:25.436 [2024-11-06 09:08:38.492010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:25.436 [2024-11-06 09:08:38.492035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:25.436 [2024-11-06 09:08:38.492093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:25.436 [2024-11-06 09:08:38.492097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.436 [2024-11-06 09:08:38.580208] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:25.436 [2024-11-06 09:08:38.580415] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:25.436 [2024-11-06 09:08:38.580736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:25.436 [2024-11-06 09:08:38.581380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:25.436 [2024-11-06 09:08:38.581609] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:25.436 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:25.437 [2024-11-06 09:08:38.636801] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 Malloc0
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 [2024-11-06 09:08:38.701012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:32:25.437 test case1: single bdev can't be used in multiple subsystems
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.437 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.437 [2024-11-06 09:08:38.724756] bdev.c:8456:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:32:25.437 [2024-11-06 09:08:38.724787] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:32:25.437 [2024-11-06 09:08:38.724809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.695 request:
00:32:25.695 {
00:32:25.695 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:32:25.695 "namespace": {
00:32:25.695 "bdev_name": "Malloc0",
00:32:25.695 "no_auto_visible": false,
00:32:25.695 "no_metadata": false
00:32:25.695 },
00:32:25.695 "method": "nvmf_subsystem_add_ns",
00:32:25.695 "req_id": 1
00:32:25.695 }
00:32:25.695 Got JSON-RPC error response
00:32:25.695 response:
00:32:25.695 {
00:32:25.695 "code": -32602,
00:32:25.695 "message": "Invalid parameters"
00:32:25.695 }
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:32:25.695 Adding namespace failed - expected result.
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:32:25.695 test case2: host connect to nvmf target in multiple paths
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:25.695 [2024-11-06 09:08:38.736857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:32:25.695 09:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:32:25.953 09:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:32:25.953 09:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:32:25.953 09:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:32:25.953 09:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:32:25.953 09:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:32:28.477 09:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:32:28.477 [global]
00:32:28.477 thread=1
00:32:28.477 invalidate=1
00:32:28.477 rw=write
00:32:28.477 time_based=1
00:32:28.477 runtime=1
00:32:28.477 ioengine=libaio
00:32:28.477 direct=1
00:32:28.477 bs=4096
00:32:28.477 iodepth=1
00:32:28.477 norandommap=0
00:32:28.477 numjobs=1
00:32:28.477
00:32:28.477 verify_dump=1
00:32:28.477 verify_backlog=512
00:32:28.477 verify_state_save=0
00:32:28.477 do_verify=1
00:32:28.477 verify=crc32c-intel
00:32:28.477 [job0]
00:32:28.477 filename=/dev/nvme0n1
00:32:28.477 Could not set queue depth (nvme0n1)
00:32:28.477 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:28.477 fio-3.35
00:32:28.477 Starting 1 thread
00:32:29.409
00:32:29.409 job0: (groupid=0, jobs=1): err= 0: pid=980821: Wed Nov 6 09:08:42 2024
00:32:29.409 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec)
00:32:29.409 slat (nsec): min=4636, max=57858, avg=9328.82, stdev=6386.07
00:32:29.409 clat (usec): min=177, max=634, avg=265.85, stdev=103.83
00:32:29.409 lat (usec): min=193, max=650, avg=275.18, stdev=107.64
00:32:29.409 clat percentiles (usec):
00:32:29.409 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198],
00:32:29.409 | 30.00th=[ 200], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 219],
00:32:29.409 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 465], 95.00th=[ 510],
00:32:29.409 | 99.00th=[ 586], 99.50th=[ 603], 99.90th=[ 635], 99.95th=[ 635],
00:32:29.409 | 99.99th=[ 635]
00:32:29.409 write: IOPS=2311, BW=9247KiB/s (9469kB/s)(9256KiB/1001msec); 0 zone resets
00:32:29.409 slat (nsec): min=6120, max=45050, avg=8430.02, stdev=3495.64
00:32:29.409 clat (usec): min=132, max=329, avg=174.77, stdev=34.09
00:32:29.409 lat (usec): min=139, max=338, avg=183.20, stdev=35.06
00:32:29.409 clat percentiles (usec):
00:32:29.409 | 1.00th=[ 137], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141],
00:32:29.409 | 30.00th=[ 149], 40.00th=[ 159], 50.00th=[ 182], 60.00th=[ 184],
00:32:29.409 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 269],
00:32:29.409 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 310], 99.95th=[ 310],
00:32:29.409 | 99.99th=[ 330]
00:32:29.409 bw ( KiB/s): min= 9944, max= 9944, per=100.00%, avg=9944.00, stdev= 0.00, samples=1
00:32:29.409 iops : min= 2486, max= 2486, avg=2486.00, stdev= 0.00, samples=1
00:32:29.409 lat (usec) : 250=79.37%, 500=17.77%, 750=2.87%
00:32:29.409 cpu : usr=2.40%, sys=3.80%, ctx=4364, majf=0, minf=1
00:32:29.409 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:29.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:29.409 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:29.409 issued rwts: total=2048,2314,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:29.409 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:29.409
00:32:29.409 Run status group 0 (all jobs):
00:32:29.409 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec
00:32:29.409 WRITE: bw=9247KiB/s (9469kB/s), 9247KiB/s-9247KiB/s (9469kB/s-9469kB/s), io=9256KiB (9478kB), run=1001-1001msec
00:32:29.409
00:32:29.409 Disk stats (read/write):
00:32:29.409 nvme0n1: ios=1822/2048, merge=0/0, ticks=577/356, in_queue=933, util=99.70%
00:32:29.409 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:32:29.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514
-- # nvmfcleanup 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.697 rmmod nvme_tcp 00:32:29.697 rmmod nvme_fabrics 00:32:29.697 rmmod nvme_keyring 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 980323 ']' 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 980323 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 980323 ']' 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 980323 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 980323 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 980323' 00:32:29.697 killing process with pid 980323 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 980323 00:32:29.697 09:08:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 980323 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:32:29.981 09:08:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:32.516
00:32:32.516 real 0m9.344s
00:32:32.516 user 0m17.460s
00:32:32.516 sys 0m3.555s
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:32:32.516 ************************************
00:32:32.516 END TEST nvmf_nmic
00:32:32.516 ************************************
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:32.516 ************************************
00:32:32.516 START TEST nvmf_fio_target
00:32:32.516 ************************************
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:32:32.516 * Looking for test storage...
00:32:32.516 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lcov --version 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.516 
09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:32:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.516 --rc genhtml_branch_coverage=1 00:32:32.516 --rc genhtml_function_coverage=1 00:32:32.516 --rc genhtml_legend=1 00:32:32.516 --rc geninfo_all_blocks=1 00:32:32.516 --rc geninfo_unexecuted_blocks=1 00:32:32.516 00:32:32.516 ' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:32:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.516 --rc genhtml_branch_coverage=1 00:32:32.516 --rc genhtml_function_coverage=1 00:32:32.516 --rc genhtml_legend=1 00:32:32.516 --rc geninfo_all_blocks=1 00:32:32.516 --rc geninfo_unexecuted_blocks=1 00:32:32.516 00:32:32.516 ' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:32:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.516 --rc genhtml_branch_coverage=1 00:32:32.516 --rc genhtml_function_coverage=1 00:32:32.516 --rc genhtml_legend=1 00:32:32.516 --rc geninfo_all_blocks=1 00:32:32.516 --rc geninfo_unexecuted_blocks=1 00:32:32.516 00:32:32.516 ' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:32:32.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.516 --rc genhtml_branch_coverage=1 00:32:32.516 --rc genhtml_function_coverage=1 00:32:32.516 --rc genhtml_legend=1 00:32:32.516 --rc geninfo_all_blocks=1 
00:32:32.516 --rc geninfo_unexecuted_blocks=1 00:32:32.516 00:32:32.516 ' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:32.516 
09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.516 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.517 09:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.517 
09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:32.517 09:08:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:32.517 09:08:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:34.419 09:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:34.419 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:34.419 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:34.419 
09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:34.419 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:34.420 Found net 
devices under 0000:09:00.0: cvl_0_0 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:34.420 Found net devices under 0000:09:00.1: cvl_0_1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:34.420 09:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:34.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:34.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:32:34.420 00:32:34.420 --- 10.0.0.2 ping statistics --- 00:32:34.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.420 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:34.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:34.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:32:34.420 00:32:34.420 --- 10.0.0.1 ping statistics --- 00:32:34.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:34.420 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.420 09:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=982907 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 982907 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 982907 ']' 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:34.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:34.420 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.679 [2024-11-06 09:08:47.715518] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:34.679 [2024-11-06 09:08:47.716673] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:32:34.679 [2024-11-06 09:08:47.716741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:34.679 [2024-11-06 09:08:47.787375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:34.679 [2024-11-06 09:08:47.847892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:34.679 [2024-11-06 09:08:47.847961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:34.679 [2024-11-06 09:08:47.847990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:34.679 [2024-11-06 09:08:47.848002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:34.679 [2024-11-06 09:08:47.848011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:34.679 [2024-11-06 09:08:47.849584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.679 [2024-11-06 09:08:47.849670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:34.679 [2024-11-06 09:08:47.849729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:34.679 [2024-11-06 09:08:47.849726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:34.679 [2024-11-06 09:08:47.939090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:34.679 [2024-11-06 09:08:47.939284] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:34.679 [2024-11-06 09:08:47.939581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:34.679 [2024-11-06 09:08:47.940179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:34.679 [2024-11-06 09:08:47.940414] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:34.679 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:34.679 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:32:34.680 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:34.680 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:34.680 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:34.938 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:34.938 09:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:35.196 [2024-11-06 09:08:48.246438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:35.196 09:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:35.455 09:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:35.455 09:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:32:35.713 09:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:35.713 09:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:35.971 09:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:35.971 09:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:36.229 09:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:36.229 09:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:36.490 09:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:36.748 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:36.749 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:37.314 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:37.314 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:37.314 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:32:37.314 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:37.880 09:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:37.880 09:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:37.880 09:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.138 09:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:38.138 09:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:38.703 09:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.960 [2024-11-06 09:08:52.010581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.960 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:39.218 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:32:39.476 09:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1208 -- # return 0 00:32:42.002 09:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:42.002 [global] 00:32:42.002 thread=1 00:32:42.002 invalidate=1 00:32:42.002 rw=write 00:32:42.002 time_based=1 00:32:42.002 runtime=1 00:32:42.002 ioengine=libaio 00:32:42.002 direct=1 00:32:42.002 bs=4096 00:32:42.002 iodepth=1 00:32:42.002 norandommap=0 00:32:42.002 numjobs=1 00:32:42.002 00:32:42.002 verify_dump=1 00:32:42.002 verify_backlog=512 00:32:42.002 verify_state_save=0 00:32:42.002 do_verify=1 00:32:42.002 verify=crc32c-intel 00:32:42.002 [job0] 00:32:42.002 filename=/dev/nvme0n1 00:32:42.002 [job1] 00:32:42.002 filename=/dev/nvme0n2 00:32:42.002 [job2] 00:32:42.002 filename=/dev/nvme0n3 00:32:42.002 [job3] 00:32:42.002 filename=/dev/nvme0n4 00:32:42.002 Could not set queue depth (nvme0n1) 00:32:42.002 Could not set queue depth (nvme0n2) 00:32:42.002 Could not set queue depth (nvme0n3) 00:32:42.002 Could not set queue depth (nvme0n4) 00:32:42.002 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.002 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.002 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.002 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:42.002 fio-3.35 00:32:42.002 Starting 4 threads 00:32:42.935 00:32:42.935 job0: (groupid=0, jobs=1): err= 0: pid=983964: Wed Nov 6 09:08:56 2024 00:32:42.935 read: IOPS=991, BW=3965KiB/s (4061kB/s)(4128KiB/1041msec) 00:32:42.935 slat (nsec): min=7258, max=55977, avg=14742.10, stdev=5872.15 00:32:42.935 clat (usec): min=220, max=41070, avg=588.02, stdev=3572.71 00:32:42.935 lat (usec): min=229, 
max=41078, avg=602.76, stdev=3572.69 00:32:42.935 clat percentiles (usec): 00:32:42.935 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:32:42.935 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:32:42.935 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 302], 95.00th=[ 310], 00:32:42.935 | 99.00th=[ 363], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:42.935 | 99.99th=[41157] 00:32:42.935 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(6144KiB/1041msec); 0 zone resets 00:32:42.935 slat (usec): min=7, max=40917, avg=62.20, stdev=1243.38 00:32:42.935 clat (usec): min=135, max=3144, avg=202.47, stdev=79.59 00:32:42.935 lat (usec): min=147, max=41136, avg=264.67, stdev=1246.44 00:32:42.935 clat percentiles (usec): 00:32:42.935 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 176], 00:32:42.935 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:32:42.935 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 241], 00:32:42.935 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 469], 99.95th=[ 3130], 00:32:42.935 | 99.99th=[ 3130] 00:32:42.935 bw ( KiB/s): min= 4096, max= 8192, per=31.23%, avg=6144.00, stdev=2896.31, samples=2 00:32:42.935 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:32:42.935 lat (usec) : 250=64.45%, 500=35.20% 00:32:42.935 lat (msec) : 4=0.04%, 50=0.31% 00:32:42.935 cpu : usr=3.65%, sys=4.62%, ctx=2573, majf=0, minf=1 00:32:42.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.935 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:42.935 job1: (groupid=0, jobs=1): err= 0: pid=983965: Wed Nov 6 09:08:56 2024 00:32:42.935 read: IOPS=1000, BW=4004KiB/s 
(4100kB/s)(4128KiB/1031msec) 00:32:42.935 slat (nsec): min=6626, max=66068, avg=12993.57, stdev=5720.87 00:32:42.935 clat (usec): min=216, max=42063, avg=652.45, stdev=3842.02 00:32:42.935 lat (usec): min=223, max=42077, avg=665.44, stdev=3842.11 00:32:42.935 clat percentiles (usec): 00:32:42.935 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 253], 00:32:42.935 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:32:42.935 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 420], 00:32:42.935 | 99.00th=[ 502], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:32:42.935 | 99.99th=[42206] 00:32:42.935 write: IOPS=1489, BW=5959KiB/s (6102kB/s)(6144KiB/1031msec); 0 zone resets 00:32:42.935 slat (nsec): min=8235, max=57214, avg=18582.36, stdev=7419.85 00:32:42.935 clat (usec): min=149, max=1027, avg=197.67, stdev=35.19 00:32:42.936 lat (usec): min=162, max=1070, avg=216.25, stdev=36.89 00:32:42.936 clat percentiles (usec): 00:32:42.936 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:32:42.936 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:32:42.936 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 233], 00:32:42.936 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 1029], 99.95th=[ 1029], 00:32:42.936 | 99.99th=[ 1029] 00:32:42.936 bw ( KiB/s): min= 4096, max= 8192, per=31.23%, avg=6144.00, stdev=2896.31, samples=2 00:32:42.936 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:32:42.936 lat (usec) : 250=65.38%, 500=34.11%, 750=0.08% 00:32:42.936 lat (msec) : 2=0.08%, 50=0.35% 00:32:42.936 cpu : usr=3.88%, sys=4.56%, ctx=2569, majf=0, minf=1 00:32:42.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.936 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:32:42.936 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:42.936 job2: (groupid=0, jobs=1): err= 0: pid=983966: Wed Nov 6 09:08:56 2024
00:32:42.936 read: IOPS=856, BW=3427KiB/s (3509kB/s)(3492KiB/1019msec)
00:32:42.936 slat (nsec): min=5134, max=79750, avg=22251.73, stdev=11476.75
00:32:42.936 clat (usec): min=254, max=41081, avg=785.25, stdev=4325.77
00:32:42.936 lat (usec): min=265, max=41089, avg=807.50, stdev=4324.86
00:32:42.936 clat percentiles (usec):
00:32:42.936 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285],
00:32:42.936 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330],
00:32:42.936 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 383],
00:32:42.936 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:32:42.936 | 99.99th=[41157]
00:32:42.936 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets
00:32:42.936 slat (usec): min=7, max=40803, avg=82.97, stdev=1536.53
00:32:42.936 clat (usec): min=166, max=382, avg=213.60, stdev=30.13
00:32:42.936 lat (usec): min=175, max=41052, avg=296.57, stdev=1537.93
00:32:42.936 clat percentiles (usec):
00:32:42.936 | 1.00th=[ 176], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194],
00:32:42.936 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212],
00:32:42.936 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 265],
00:32:42.936 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 383],
00:32:42.936 | 99.99th=[ 383]
00:32:42.936 bw ( KiB/s): min= 856, max= 7336, per=20.82%, avg=4096.00, stdev=4582.05, samples=2
00:32:42.936 iops : min= 214, max= 1834, avg=1024.00, stdev=1145.51, samples=2
00:32:42.936 lat (usec) : 250=50.29%, 500=49.18%
00:32:42.936 lat (msec) : 50=0.53%
00:32:42.936 cpu : usr=1.77%, sys=3.83%, ctx=1900, majf=0, minf=1
00:32:42.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:42.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:42.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:42.936 issued rwts: total=873,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:42.936 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:42.936 job3: (groupid=0, jobs=1): err= 0: pid=983967: Wed Nov 6 09:08:56 2024
00:32:42.936 read: IOPS=500, BW=2004KiB/s (2052kB/s)(2076KiB/1036msec)
00:32:42.936 slat (nsec): min=8404, max=48255, avg=17472.38, stdev=5160.15
00:32:42.936 clat (usec): min=240, max=41978, avg=1512.33, stdev=6855.49
00:32:42.936 lat (usec): min=250, max=41996, avg=1529.80, stdev=6855.49
00:32:42.936 clat percentiles (usec):
00:32:42.936 | 1.00th=[ 255], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 293],
00:32:42.936 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310],
00:32:42.936 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 355], 95.00th=[ 469],
00:32:42.936 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206],
00:32:42.936 | 99.99th=[42206]
00:32:42.936 write: IOPS=988, BW=3954KiB/s (4049kB/s)(4096KiB/1036msec); 0 zone resets
00:32:42.936 slat (usec): min=9, max=12510, avg=27.98, stdev=390.52
00:32:42.936 clat (usec): min=151, max=287, avg=201.51, stdev=28.86
00:32:42.936 lat (usec): min=161, max=12724, avg=229.49, stdev=392.33
00:32:42.936 clat percentiles (usec):
00:32:42.936 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 176],
00:32:42.936 | 30.00th=[ 186], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 208],
00:32:42.936 | 70.00th=[ 217], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 251],
00:32:42.936 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 289],
00:32:42.936 | 99.99th=[ 289]
00:32:42.936 bw ( KiB/s): min= 4096, max= 4096, per=20.82%, avg=4096.00, stdev= 0.00, samples=2
00:32:42.936 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:32:42.936 lat (usec) : 250=62.67%, 500=36.23%, 750=0.06%
00:32:42.936 lat (msec) : 20=0.06%, 50=0.97%
00:32:42.936 cpu : usr=1.84%, sys=3.00%, ctx=1545, majf=0, minf=1
00:32:42.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:42.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:42.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:42.936 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:42.936 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:42.936
00:32:42.936 Run status group 0 (all jobs):
00:32:42.936 READ: bw=13.0MiB/s (13.6MB/s), 2004KiB/s-4004KiB/s (2052kB/s-4100kB/s), io=13.5MiB (14.2MB), run=1019-1041msec
00:32:42.936 WRITE: bw=19.2MiB/s (20.1MB/s), 3954KiB/s-5959KiB/s (4049kB/s-6102kB/s), io=20.0MiB (21.0MB), run=1019-1041msec
00:32:42.936
00:32:42.936 Disk stats (read/write):
00:32:42.936 nvme0n1: ios=1079/1536, merge=0/0, ticks=1271/279, in_queue=1550, util=87.47%
00:32:42.936 nvme0n2: ios=1077/1536, merge=0/0, ticks=514/283, in_queue=797, util=91.26%
00:32:42.936 nvme0n3: ios=893/1024, merge=0/0, ticks=1379/209, in_queue=1588, util=95.10%
00:32:42.936 nvme0n4: ios=571/1024, merge=0/0, ticks=701/186, in_queue=887, util=94.34%
00:32:42.936 09:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:32:42.936 [global]
00:32:42.936 thread=1
00:32:42.936 invalidate=1
00:32:42.936 rw=randwrite
00:32:42.936 time_based=1
00:32:42.936 runtime=1
00:32:42.936 ioengine=libaio
00:32:42.936 direct=1
00:32:42.936 bs=4096
00:32:42.936 iodepth=1
00:32:42.936 norandommap=0
00:32:42.936 numjobs=1
00:32:42.936
00:32:42.936 verify_dump=1
00:32:42.936 verify_backlog=512
00:32:42.936 verify_state_save=0
00:32:42.936 do_verify=1
00:32:42.936 verify=crc32c-intel
00:32:42.936 [job0]
00:32:42.936 filename=/dev/nvme0n1
00:32:42.936 [job1]
00:32:42.936 filename=/dev/nvme0n2
00:32:42.936 [job2]
00:32:42.936 filename=/dev/nvme0n3
00:32:42.936 [job3]
00:32:42.936 filename=/dev/nvme0n4
00:32:43.194 Could not set queue depth (nvme0n1)
00:32:43.194 Could not set queue depth (nvme0n2)
00:32:43.194 Could not set queue depth (nvme0n3)
00:32:43.194 Could not set queue depth (nvme0n4)
00:32:43.194 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:43.194 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:43.194 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:43.194 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:32:43.194 fio-3.35
00:32:43.194 Starting 4 threads
00:32:44.566
00:32:44.566 job0: (groupid=0, jobs=1): err= 0: pid=984194: Wed Nov 6 09:08:57 2024
00:32:44.566 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec)
00:32:44.566 slat (nsec): min=7171, max=34882, avg=22059.86, stdev=8250.02
00:32:44.566 clat (usec): min=40907, max=41007, avg=40967.58, stdev=26.25
00:32:44.566 lat (usec): min=40938, max=41026, avg=40989.64, stdev=23.23
00:32:44.566 clat percentiles (usec):
00:32:44.566 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:32:44.566 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:32:44.566 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:32:44.566 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:32:44.566 | 99.99th=[41157]
00:32:44.566 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets
00:32:44.566 slat (nsec): min=5746, max=31280, avg=7712.68, stdev=3388.12
00:32:44.566 clat (usec): min=151, max=507, avg=215.31, stdev=39.87
00:32:44.566 lat (usec): min=159, max=514, avg=223.02, stdev=40.17
00:32:44.566 clat percentiles (usec):
00:32:44.566 | 1.00th=[ 161], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180],
00:32:44.566 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 215], 60.00th=[ 227],
00:32:44.566 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 258], 95.00th=[ 273],
00:32:44.566 | 99.00th=[ 379], 99.50th=[ 383], 99.90th=[ 506], 99.95th=[ 506],
00:32:44.566 | 99.99th=[ 506]
00:32:44.566 bw ( KiB/s): min= 4087, max= 4087, per=29.08%, avg=4087.00, stdev= 0.00, samples=1
00:32:44.566 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:32:44.566 lat (usec) : 250=83.15%, 500=12.55%, 750=0.19%
00:32:44.566 lat (msec) : 50=4.12%
00:32:44.566 cpu : usr=0.20%, sys=0.30%, ctx=535, majf=0, minf=2
00:32:44.566 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:44.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.566 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:44.566 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:44.566 job1: (groupid=0, jobs=1): err= 0: pid=984195: Wed Nov 6 09:08:57 2024
00:32:44.566 read: IOPS=1788, BW=7153KiB/s (7325kB/s)(7160KiB/1001msec)
00:32:44.566 slat (nsec): min=5932, max=65074, avg=12579.91, stdev=5972.69
00:32:44.566 clat (usec): min=204, max=715, avg=279.45, stdev=53.03
00:32:44.566 lat (usec): min=211, max=736, avg=292.03, stdev=56.04
00:32:44.566 clat percentiles (usec):
00:32:44.566 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 229], 20.00th=[ 243],
00:32:44.566 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 281],
00:32:44.566 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[ 351],
00:32:44.566 | 99.00th=[ 515], 99.50th=[ 570], 99.90th=[ 660], 99.95th=[ 717],
00:32:44.566 | 99.99th=[ 717]
00:32:44.567 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets
00:32:44.567 slat (usec): min=7, max=23744, avg=28.28, stdev=524.36
00:32:44.567 clat (usec): min=146, max=1435, avg=196.71, stdev=44.28
00:32:44.567 lat (usec): min=156, max=23933, avg=224.99, stdev=526.16
00:32:44.567 clat percentiles (usec):
00:32:44.567 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172],
00:32:44.567 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198],
00:32:44.567 | 70.00th=[ 206], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 241],
00:32:44.567 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 408], 99.95th=[ 1237],
00:32:44.567 | 99.99th=[ 1434]
00:32:44.567 bw ( KiB/s): min= 8192, max= 8192, per=58.29%, avg=8192.00, stdev= 0.00, samples=1
00:32:44.567 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:32:44.567 lat (usec) : 250=63.37%, 500=36.09%, 750=0.50%
00:32:44.567 lat (msec) : 2=0.05%
00:32:44.567 cpu : usr=4.10%, sys=7.70%, ctx=3840, majf=0, minf=1
00:32:44.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:44.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 issued rwts: total=1790,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:44.567 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:44.567 job2: (groupid=0, jobs=1): err= 0: pid=984196: Wed Nov 6 09:08:57 2024
00:32:44.567 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec)
00:32:44.567 slat (nsec): min=7603, max=36564, avg=22552.59, stdev=8546.60
00:32:44.567 clat (usec): min=292, max=41032, avg=39109.61, stdev=8670.36
00:32:44.567 lat (usec): min=326, max=41050, avg=39132.17, stdev=8667.77
00:32:44.567 clat percentiles (usec):
00:32:44.567 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:32:44.567 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:32:44.567 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:32:44.567 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:32:44.567 | 99.99th=[41157]
00:32:44.567 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:32:44.567 slat (nsec): min=6294, max=31340, avg=10546.36, stdev=4545.42
00:32:44.567 clat (usec): min=149, max=546, avg=291.80, stdev=88.72
00:32:44.567 lat (usec): min=156, max=556, avg=302.34, stdev=90.74
00:32:44.567 clat percentiles (usec):
00:32:44.567 | 1.00th=[ 159], 5.00th=[ 198], 10.00th=[ 215], 20.00th=[ 227],
00:32:44.567 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 262],
00:32:44.567 | 70.00th=[ 338], 80.00th=[ 392], 90.00th=[ 429], 95.00th=[ 465],
00:32:44.567 | 99.00th=[ 498], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 545],
00:32:44.567 | 99.99th=[ 545]
00:32:44.567 bw ( KiB/s): min= 4087, max= 4087, per=29.08%, avg=4087.00, stdev= 0.00, samples=1
00:32:44.567 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:32:44.567 lat (usec) : 250=50.19%, 500=45.13%, 750=0.75%
00:32:44.567 lat (msec) : 50=3.93%
00:32:44.567 cpu : usr=0.10%, sys=0.59%, ctx=537, majf=0, minf=1
00:32:44.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:44.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:44.567 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:44.567 job3: (groupid=0, jobs=1): err= 0: pid=984197: Wed Nov 6 09:08:57 2024
00:32:44.567 read: IOPS=258, BW=1035KiB/s (1060kB/s)(1056KiB/1020msec)
00:32:44.567 slat (nsec): min=5961, max=50203, avg=11762.12, stdev=7112.82
00:32:44.567 clat (usec): min=220, max=42003, avg=3214.86, stdev=10500.79
00:32:44.567 lat (usec): min=226, max=42023, avg=3226.62, stdev=10503.87
00:32:44.567 clat percentiles (usec):
00:32:44.567 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 249],
00:32:44.567 | 30.00th=[ 260], 40.00th=[ 281], 50.00th=[ 375], 60.00th=[ 424],
00:32:44.567 | 70.00th=[ 494], 80.00th=[ 545], 90.00th=[ 594], 95.00th=[42206],
00:32:44.567 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:32:44.567 | 99.99th=[42206]
00:32:44.567 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets
00:32:44.567 slat (nsec): min=7433, max=39734, avg=12144.41, stdev=4904.83
00:32:44.567 clat (usec): min=145, max=1211, avg=310.97, stdev=100.16
00:32:44.567 lat (usec): min=153, max=1228, avg=323.12, stdev=102.39
00:32:44.567 clat percentiles (usec):
00:32:44.567 | 1.00th=[ 159], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 235],
00:32:44.567 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 273], 60.00th=[ 326],
00:32:44.567 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 429], 95.00th=[ 461],
00:32:44.567 | 99.00th=[ 537], 99.50th=[ 644], 99.90th=[ 1205], 99.95th=[ 1205],
00:32:44.567 | 99.99th=[ 1205]
00:32:44.567 bw ( KiB/s): min= 4087, max= 4087, per=29.08%, avg=4087.00, stdev= 0.00, samples=1
00:32:44.567 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1
00:32:44.567 lat (usec) : 250=33.25%, 500=55.03%, 750=9.15%
00:32:44.567 lat (msec) : 2=0.13%, 4=0.13%, 50=2.32%
00:32:44.567 cpu : usr=0.39%, sys=1.37%, ctx=777, majf=0, minf=1
00:32:44.567 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:32:44.567 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:44.567 issued rwts: total=264,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:44.567 latency : target=0, window=0, percentile=100.00%, depth=1
00:32:44.567
00:32:44.567 Run status group 0 (all jobs):
00:32:44.567 READ: bw=8227KiB/s (8425kB/s), 86.4KiB/s-7153KiB/s (88.5kB/s-7325kB/s), io=8392KiB (8593kB), run=1001-1020msec
00:32:44.567 WRITE: bw=13.7MiB/s (14.4MB/s), 2008KiB/s-8184KiB/s (2056kB/s-8380kB/s), io=14.0MiB (14.7MB), run=1001-1020msec
00:32:44.567
00:32:44.567 Disk stats (read/write):
00:32:44.567 nvme0n1: ios=67/512, merge=0/0, ticks=720/104, in_queue=824, util=86.77%
00:32:44.567 nvme0n2: ios=1560/1700, merge=0/0, ticks=928/322, in_queue=1250, util=100.00%
00:32:44.567 nvme0n3: ios=76/512, merge=0/0, ticks=876/147, in_queue=1023, util=93.74%
00:32:44.567 nvme0n4: ios=285/512, merge=0/0, ticks=1621/159, in_queue=1780, util=98.11%
00:32:44.567 09:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:32:44.567 [global]
00:32:44.567 thread=1
00:32:44.567 invalidate=1
00:32:44.567 rw=write
00:32:44.567 time_based=1
00:32:44.567 runtime=1
00:32:44.567 ioengine=libaio
00:32:44.567 direct=1
00:32:44.567 bs=4096
00:32:44.567 iodepth=128
00:32:44.567 norandommap=0
00:32:44.567 numjobs=1
00:32:44.567
00:32:44.567 verify_dump=1
00:32:44.567 verify_backlog=512
00:32:44.567 verify_state_save=0
00:32:44.567 do_verify=1
00:32:44.567 verify=crc32c-intel
00:32:44.567 [job0]
00:32:44.567 filename=/dev/nvme0n1
00:32:44.567 [job1]
00:32:44.567 filename=/dev/nvme0n2
00:32:44.567 [job2]
00:32:44.567 filename=/dev/nvme0n3
00:32:44.567 [job3]
00:32:44.567 filename=/dev/nvme0n4
00:32:44.567 Could not set queue depth (nvme0n1)
00:32:44.567 Could not set queue depth (nvme0n2)
00:32:44.567 Could not set queue depth (nvme0n3)
00:32:44.567 Could not set queue depth (nvme0n4)
00:32:44.825 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:44.825 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:44.825 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:44.825 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:44.825 fio-3.35
00:32:44.825 Starting 4 threads
00:32:46.200
00:32:46.200 job0: (groupid=0, jobs=1): err= 0: pid=984430: Wed Nov 6 09:08:59 2024
00:32:46.200 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec)
00:32:46.200 slat (usec): min=3, max=4211, avg=89.25, stdev=427.35
00:32:46.200 clat (usec): min=8116, max=16337, avg=11577.76, stdev=1190.10
00:32:46.200 lat (usec): min=8190, max=16354, avg=11667.01, stdev=1224.30
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10552],
00:32:46.200 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863],
00:32:46.200 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435],
00:32:46.200 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15533], 99.95th=[15926],
00:32:46.200 | 99.99th=[16319]
00:32:46.200 write: IOPS=5428, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1004msec); 0 zone resets
00:32:46.200 slat (usec): min=4, max=11721, avg=92.56, stdev=492.71
00:32:46.200 clat (usec): min=3074, max=28443, avg=12409.68, stdev=3227.85
00:32:46.200 lat (usec): min=3687, max=28450, avg=12502.24, stdev=3244.18
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 7701], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11076],
00:32:46.200 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994],
00:32:46.200 | 70.00th=[12387], 80.00th=[12780], 90.00th=[14484], 95.00th=[17171],
00:32:46.200 | 99.00th=[27919], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443],
00:32:46.200 | 99.99th=[28443]
00:32:46.200 bw ( KiB/s): min=20232, max=22396, per=31.13%, avg=21314.00, stdev=1530.18, samples=2
00:32:46.200 iops : min= 5058, max= 5599, avg=5328.50, stdev=382.54, samples=2
00:32:46.200 lat (msec) : 4=0.09%, 10=6.90%, 20=90.64%, 50=2.37%
00:32:46.200 cpu : usr=5.08%, sys=8.18%, ctx=574, majf=0, minf=2
00:32:46.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:32:46.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:46.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:46.200 issued rwts: total=5120,5450,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:46.200 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:46.200 job1: (groupid=0, jobs=1): err= 0: pid=984431: Wed Nov 6 09:08:59 2024
00:32:46.200 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec)
00:32:46.200 slat (usec): min=2, max=13411, avg=106.45, stdev=727.40
00:32:46.200 clat (usec): min=2433, max=34986, avg=13632.11, stdev=3896.09
00:32:46.200 lat (usec): min=2436, max=34990, avg=13738.56, stdev=3931.96
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 6718], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11207],
00:32:46.200 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13304],
00:32:46.200 | 70.00th=[14222], 80.00th=[15008], 90.00th=[18220], 95.00th=[20579],
00:32:46.200 | 99.00th=[29230], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866],
00:32:46.200 | 99.99th=[34866]
00:32:46.200 write: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets
00:32:46.200 slat (usec): min=3, max=11997, avg=86.01, stdev=635.67
00:32:46.200 clat (usec): min=254, max=37604, avg=12586.58, stdev=3702.12
00:32:46.200 lat (usec): min=2097, max=37625, avg=12672.59, stdev=3721.35
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 3490], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[10552],
00:32:46.200 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780],
00:32:46.200 | 70.00th=[13173], 80.00th=[14222], 90.00th=[17171], 95.00th=[20579],
00:32:46.200 | 99.00th=[23725], 99.50th=[25560], 99.90th=[37487], 99.95th=[37487],
00:32:46.200 | 99.99th=[37487]
00:32:46.200 bw ( KiB/s): min=19056, max=20480, per=28.87%, avg=19768.00, stdev=1006.92, samples=2
00:32:46.200 iops : min= 4764, max= 5120, avg=4942.00, stdev=251.73, samples=2
00:32:46.200 lat (usec) : 500=0.01%
00:32:46.200 lat (msec) : 4=1.07%, 10=9.69%, 20=82.60%, 50=6.62%
00:32:46.200 cpu : usr=2.79%, sys=4.69%, ctx=401, majf=0, minf=1
00:32:46.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:32:46.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:46.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:46.200 issued rwts: total=4608,5070,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:46.200 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:46.200 job2: (groupid=0, jobs=1): err= 0: pid=984432: Wed Nov 6 09:08:59 2024
00:32:46.200 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec)
00:32:46.200 slat (usec): min=3, max=14478, avg=141.79, stdev=816.62
00:32:46.200 clat (usec): min=4984, max=54323, avg=18725.01, stdev=9735.11
00:32:46.200 lat (usec): min=4991, max=64260, avg=18866.80, stdev=9811.80
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[10028], 5.00th=[11469], 10.00th=[12780], 20.00th=[13173],
00:32:46.200 | 30.00th=[13566], 40.00th=[15008], 50.00th=[15926], 60.00th=[16909],
00:32:46.200 | 70.00th=[17695], 80.00th=[18220], 90.00th=[36963], 95.00th=[45351],
00:32:46.200 | 99.00th=[52691], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264],
00:32:46.200 | 99.99th=[54264]
00:32:46.200 write: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1005msec); 0 zone resets
00:32:46.200 slat (usec): min=4, max=23075, avg=175.65, stdev=1125.44
00:32:46.200 clat (usec): min=465, max=97615, avg=22400.27, stdev=17259.26
00:32:46.200 lat (usec): min=4676, max=97632, avg=22575.92, stdev=17368.71
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 9372], 5.00th=[12780], 10.00th=[13042], 20.00th=[13566],
00:32:46.200 | 30.00th=[13829], 40.00th=[14353], 50.00th=[15533], 60.00th=[16057],
00:32:46.200 | 70.00th=[17171], 80.00th=[28443], 90.00th=[46924], 95.00th=[59507],
00:32:46.200 | 99.00th=[96994], 99.50th=[96994], 99.90th=[98042], 99.95th=[98042],
00:32:46.200 | 99.99th=[98042]
00:32:46.200 bw ( KiB/s): min=12288, max=12288, per=17.95%, avg=12288.00, stdev= 0.00, samples=2
00:32:46.200 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2
00:32:46.200 lat (usec) : 500=0.02%
00:32:46.200 lat (msec) : 10=1.04%, 20=79.14%, 50=14.39%, 100=5.41%
00:32:46.200 cpu : usr=2.89%, sys=5.28%, ctx=340, majf=0, minf=1
00:32:46.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:32:46.200 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:46.200 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:46.200 issued rwts: total=3072,3098,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:46.200 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:46.200 job3: (groupid=0, jobs=1): err= 0: pid=984433: Wed Nov 6 09:08:59 2024
00:32:46.200 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1002msec)
00:32:46.200 slat (usec): min=3, max=23581, avg=144.66, stdev=943.81
00:32:46.200 clat (usec): min=655, max=72476, avg=17668.23, stdev=11834.64
00:32:46.200 lat (usec): min=2008, max=72480, avg=17812.89, stdev=11906.42
00:32:46.200 clat percentiles (usec):
00:32:46.200 | 1.00th=[ 3392], 5.00th=[ 4293], 10.00th=[ 7046], 20.00th=[12780],
00:32:46.200 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877],
00:32:46.200 | 70.00th=[16909], 80.00th=[18744], 90.00th=[29230], 95.00th=[48497],
00:32:46.200 | 99.00th=[60031], 99.50th=[65799], 99.90th=[72877], 99.95th=[72877],
00:32:46.200 | 99.99th=[72877]
00:32:46.200 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets
00:32:46.200 slat (usec): min=4, max=23261, avg=146.49, stdev=948.12
00:32:46.200 clat (usec): min=441, max=118869, avg=20196.71, stdev=17855.37
00:32:46.200 lat (usec): min=453, max=118877, avg=20343.20, stdev=17919.15
00:32:46.200 clat percentiles (msec):
00:32:46.200 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 12],
00:32:46.200 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15],
00:32:46.200 | 70.00th=[ 16], 80.00th=[ 25], 90.00th=[ 46], 95.00th=[ 64],
00:32:46.200 | 99.00th=[ 90], 99.50th=[ 116], 99.90th=[ 120], 99.95th=[ 120],
00:32:46.200 | 99.99th=[ 120]
00:32:46.200 bw ( KiB/s): min=14304, max=14368, per=20.94%, avg=14336.00, stdev=45.25, samples=2
00:32:46.200 iops : min= 3576, max= 3592, avg=3584.00, stdev=11.31, samples=2
00:32:46.200 lat (usec) : 500=0.05%, 750=0.02%
00:32:46.200 lat (msec) : 2=0.36%, 4=3.66%, 10=9.70%, 20=66.20%, 50=14.28%
00:32:46.200 lat (msec) : 100=5.42%, 250=0.33%
00:32:46.200 cpu : usr=2.80%, sys=6.09%, ctx=351, majf=0, minf=2
00:32:46.200 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:32:46.201 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:46.201 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:46.201 issued rwts: total=3063,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:46.201 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:46.201
00:32:46.201 Run status group 0 (all jobs):
00:32:46.201 READ: bw=61.7MiB/s (64.7MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=62.0MiB (65.0MB), run=1002-1005msec
00:32:46.201 WRITE: bw=66.9MiB/s (70.1MB/s), 12.0MiB/s-21.2MiB/s (12.6MB/s-22.2MB/s), io=67.2MiB (70.5MB), run=1002-1005msec
00:32:46.201
00:32:46.201 Disk stats (read/write):
00:32:46.201 nvme0n1: ios=4439/4608, merge=0/0, ticks=16929/17944, in_queue=34873, util=85.87%
00:32:46.201 nvme0n2: ios=4106/4096, merge=0/0, ticks=41919/37390, in_queue=79309, util=89.85%
00:32:46.201 nvme0n3: ios=2445/2560, merge=0/0, ticks=15064/19893, in_queue=34957, util=95.11%
00:32:46.201 nvme0n4: ios=2328/3072, merge=0/0, ticks=18191/31513, in_queue=49704, util=95.49%
00:32:46.201 09:08:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:32:46.201 [global]
00:32:46.201 thread=1
00:32:46.201 invalidate=1
00:32:46.201 rw=randwrite
00:32:46.201 time_based=1
00:32:46.201 runtime=1
00:32:46.201 ioengine=libaio
00:32:46.201 direct=1
00:32:46.201 bs=4096
00:32:46.201 iodepth=128
00:32:46.201 norandommap=0
00:32:46.201 numjobs=1
00:32:46.201
00:32:46.201 verify_dump=1
00:32:46.201 verify_backlog=512
00:32:46.201 verify_state_save=0
00:32:46.201 do_verify=1
00:32:46.201 verify=crc32c-intel
00:32:46.201 [job0]
00:32:46.201 filename=/dev/nvme0n1
00:32:46.201 [job1]
00:32:46.201 filename=/dev/nvme0n2
00:32:46.201 [job2]
00:32:46.201 filename=/dev/nvme0n3
00:32:46.201 [job3]
00:32:46.201 filename=/dev/nvme0n4
00:32:46.201 Could not set queue depth (nvme0n1)
00:32:46.201 Could not set queue depth (nvme0n2)
00:32:46.201 Could not set queue depth (nvme0n3)
00:32:46.201 Could not set queue depth (nvme0n4)
00:32:46.201 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:46.201 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:46.201 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:46.201 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:32:46.201 fio-3.35
00:32:46.201 Starting 4 threads
00:32:47.574
00:32:47.574 job0: (groupid=0, jobs=1): err= 0: pid=984669: Wed Nov 6 09:09:00 2024
00:32:47.574 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1006msec)
00:32:47.574 slat (usec): min=3, max=22296, avg=141.19, stdev=985.10
00:32:47.574 clat (usec): min=3909, max=51132, avg=16917.61, stdev=7012.67
00:32:47.574 lat (usec): min=6766, max=51138, avg=17058.80, stdev=7056.50
00:32:47.574 clat percentiles (usec):
00:32:47.574 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[12125],
00:32:47.574 | 30.00th=[13173], 40.00th=[14222], 50.00th=[15270], 60.00th=[16057],
00:32:47.574 | 70.00th=[17957], 80.00th=[21103], 90.00th=[24511], 95.00th=[30016],
00:32:47.574 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119],
00:32:47.574 | 99.99th=[51119]
00:32:47.574 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets
00:32:47.574 slat (usec): min=4, max=22026, avg=182.34, stdev=1247.24
00:32:47.574 clat (usec): min=826, max=113524, avg=25919.45, stdev=22157.63
00:32:47.574 lat (usec): min=837, max=114837, avg=26101.79, stdev=22280.18
00:32:47.574 clat percentiles (msec):
00:32:47.574 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 13],
00:32:47.574 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 21], 60.00th=[ 23],
00:32:47.574 | 70.00th=[ 24], 80.00th=[ 31], 90.00th=[ 61], 95.00th=[ 85],
00:32:47.574 | 99.00th=[ 110], 99.50th=[ 112], 99.90th=[ 114], 99.95th=[ 114],
00:32:47.574 | 99.99th=[ 114]
00:32:47.574 bw ( KiB/s): min=11000, max=13576, per=19.93%, avg=12288.00, stdev=1821.51, samples=2
00:32:47.574 iops : min= 2750, max= 3394, avg=3072.00, stdev=455.38, samples=2
00:32:47.574 lat (usec) : 1000=0.07%
00:32:47.574 lat (msec) : 2=0.20%, 4=0.02%, 10=12.44%, 20=49.30%, 50=31.22%
00:32:47.574 lat (msec) : 100=5.70%, 250=1.05%
00:32:47.574 cpu : usr=2.59%, sys=4.58%, ctx=278, majf=0, minf=1
00:32:47.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:32:47.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:47.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:47.574 issued rwts: total=2837,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:47.574 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:47.574 job1: (groupid=0, jobs=1): err= 0: pid=984678: Wed Nov 6 09:09:00 2024
00:32:47.574 read: IOPS=4132, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1010msec)
00:32:47.574 slat (usec): min=2, max=19584, avg=108.73, stdev=806.19
00:32:47.574 clat (usec): min=3291, max=45637, avg=14539.32, stdev=6162.22
00:32:47.574 lat (usec): min=7505, max=45643, avg=14648.05, stdev=6203.42
00:32:47.574 clat percentiles (usec):
00:32:47.574 | 1.00th=[ 8356], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[10159],
00:32:47.574 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12780], 60.00th=[14484],
00:32:47.574 | 70.00th=[15795], 80.00th=[17433], 90.00th=[20055], 95.00th=[26084],
00:32:47.574 | 99.00th=[42206], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876],
00:32:47.574 | 99.99th=[45876]
00:32:47.574 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets
00:32:47.574 slat (usec): min=4, max=23320, avg=106.76, stdev=898.12
00:32:47.574 clat (usec): min=1202, max=63216, avg=14600.08, stdev=8402.07
00:32:47.574 lat (usec): min=2239, max=63228, avg=14706.84, stdev=8486.18
00:32:47.574 clat percentiles (usec):
00:32:47.574 | 1.00th=[ 5669], 5.00th=[ 7767], 10.00th=[ 8225], 20.00th=[10290],
00:32:47.574 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042],
00:32:47.574 | 70.00th=[13829], 80.00th=[15139], 90.00th=[21627], 95.00th=[35914],
00:32:47.574 | 99.00th=[44303], 99.50th=[44827], 99.90th=[56886], 99.95th=[56886],
00:32:47.574 | 99.99th=[63177]
00:32:47.574 bw ( KiB/s): min=15984, max=20480, per=29.57%, avg=18232.00, stdev=3179.15, samples=2
00:32:47.574 iops : min= 3996, max= 5120, avg=4558.00, stdev=794.79, samples=2
00:32:47.574 lat (msec) : 2=0.01%, 4=0.24%, 10=17.70%, 20=71.87%, 50=9.92%
00:32:47.574 lat (msec) : 100=0.26%
00:32:47.574 cpu : usr=5.05%, sys=7.14%, ctx=340, majf=0, minf=1
00:32:47.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:32:47.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:47.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:47.574 issued rwts: total=4174,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:47.574 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:47.574 job2: (groupid=0, jobs=1): err= 0: pid=984714: Wed Nov 6 09:09:00 2024
00:32:47.574 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec)
00:32:47.574 slat (usec): min=3, max=17045, avg=127.19, stdev=938.79
00:32:47.574 clat (usec): min=5633, max=64352, avg=16327.54, stdev=8901.46
00:32:47.574 lat (usec): min=5638, max=64359, avg=16454.72, stdev=8978.21
00:32:47.574 clat percentiles (usec):
00:32:47.574 | 1.00th=[ 6718], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[10159],
00:32:47.574 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13173], 60.00th=[14091],
00:32:47.574 | 70.00th=[17433], 80.00th=[20841], 90.00th=[27919], 95.00th=[33424],
00:32:47.574 | 99.00th=[50070], 99.50th=[59507], 99.90th=[64226], 99.95th=[64226],
00:32:47.574 | 99.99th=[64226]
00:32:47.574 write: IOPS=3295, BW=12.9MiB/s (13.5MB/s)(13.1MiB/1014msec); 0 zone resets
00:32:47.574 slat (usec): min=4, max=32928, avg=174.32, stdev=1260.59
00:32:47.574 clat (msec): min=6, max=115, avg=23.43, stdev=20.15
00:32:47.574 lat (msec): min=6, max=115, avg=23.60, stdev=20.26
00:32:47.574 clat percentiles (msec):
00:32:47.574 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 10],
00:32:47.574 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 17], 60.00th=[ 21],
00:32:47.574 | 70.00th=[ 23], 80.00th=[ 37], 90.00th=[ 53], 95.00th=[ 60],
00:32:47.574 | 99.00th=[ 103], 99.50th=[ 108], 99.90th=[ 115], 99.95th=[ 115],
00:32:47.574 | 99.99th=[ 115]
00:32:47.574 bw ( KiB/s): min= 9328, max=16384, per=20.85%, avg=12856.00, stdev=4989.35, samples=2
00:32:47.574 iops : min= 2332, max= 4096, avg=3214.00, stdev=1247.34, samples=2
00:32:47.574 lat (msec) : 10=20.17%, 20=47.33%, 50=25.85%, 100=5.91%, 250=0.73%
00:32:47.574 cpu : usr=3.95%, sys=5.23%, ctx=209, majf=0, minf=1
00:32:47.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:32:47.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:47.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:47.574 issued rwts: total=3072,3342,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:47.574 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:47.574 job3: (groupid=0, jobs=1): err= 0: pid=984725: Wed Nov 6 09:09:00 2024
00:32:47.574 read: IOPS=4501, BW=17.6MiB/s
(18.4MB/s)(17.7MiB/1008msec) 00:32:47.574 slat (usec): min=3, max=14952, avg=99.83, stdev=793.58 00:32:47.574 clat (usec): min=1267, max=33873, avg=13634.30, stdev=4587.65 00:32:47.574 lat (usec): min=1282, max=33879, avg=13734.13, stdev=4625.87 00:32:47.574 clat percentiles (usec): 00:32:47.574 | 1.00th=[ 7111], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9765], 00:32:47.574 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13173], 60.00th=[13960], 00:32:47.574 | 70.00th=[15401], 80.00th=[16712], 90.00th=[20055], 95.00th=[22152], 00:32:47.574 | 99.00th=[26608], 99.50th=[29492], 99.90th=[32637], 99.95th=[33817], 00:32:47.574 | 99.99th=[33817] 00:32:47.574 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:32:47.574 slat (usec): min=4, max=12522, avg=107.69, stdev=804.13 00:32:47.574 clat (usec): min=3216, max=55299, avg=14280.32, stdev=7047.35 00:32:47.574 lat (usec): min=3227, max=55308, avg=14388.00, stdev=7105.66 00:32:47.574 clat percentiles (usec): 00:32:47.574 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8979], 20.00th=[10159], 00:32:47.574 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[13173], 00:32:47.574 | 70.00th=[13960], 80.00th=[16319], 90.00th=[19530], 95.00th=[23725], 00:32:47.574 | 99.00th=[48497], 99.50th=[52167], 99.90th=[55313], 99.95th=[55313], 00:32:47.574 | 99.99th=[55313] 00:32:47.574 bw ( KiB/s): min=17968, max=18896, per=29.89%, avg=18432.00, stdev=656.20, samples=2 00:32:47.574 iops : min= 4492, max= 4724, avg=4608.00, stdev=164.05, samples=2 00:32:47.574 lat (msec) : 2=0.36%, 4=0.07%, 10=19.17%, 20=70.30%, 50=9.76% 00:32:47.574 lat (msec) : 100=0.34% 00:32:47.574 cpu : usr=4.87%, sys=4.97%, ctx=242, majf=0, minf=1 00:32:47.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:47.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:47.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:47.575 issued rwts: 
total=4538,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:47.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:47.575 00:32:47.575 Run status group 0 (all jobs): 00:32:47.575 READ: bw=56.3MiB/s (59.1MB/s), 11.0MiB/s-17.6MiB/s (11.6MB/s-18.4MB/s), io=57.1MiB (59.9MB), run=1006-1014msec 00:32:47.575 WRITE: bw=60.2MiB/s (63.1MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.7MB/s), io=61.1MiB (64.0MB), run=1006-1014msec 00:32:47.575 00:32:47.575 Disk stats (read/write): 00:32:47.575 nvme0n1: ios=2185/2560, merge=0/0, ticks=23122/44027, in_queue=67149, util=86.57% 00:32:47.575 nvme0n2: ios=3635/4068, merge=0/0, ticks=45781/49791, in_queue=95572, util=97.87% 00:32:47.575 nvme0n3: ios=2705/3072, merge=0/0, ticks=44011/57597, in_queue=101608, util=97.81% 00:32:47.575 nvme0n4: ios=3638/4096, merge=0/0, ticks=46902/57408, in_queue=104310, util=100.00% 00:32:47.575 09:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:47.575 09:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=984947 00:32:47.575 09:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:47.575 09:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:47.575 [global] 00:32:47.575 thread=1 00:32:47.575 invalidate=1 00:32:47.575 rw=read 00:32:47.575 time_based=1 00:32:47.575 runtime=10 00:32:47.575 ioengine=libaio 00:32:47.575 direct=1 00:32:47.575 bs=4096 00:32:47.575 iodepth=1 00:32:47.575 norandommap=1 00:32:47.575 numjobs=1 00:32:47.575 00:32:47.575 [job0] 00:32:47.575 filename=/dev/nvme0n1 00:32:47.575 [job1] 00:32:47.575 filename=/dev/nvme0n2 00:32:47.575 [job2] 00:32:47.575 filename=/dev/nvme0n3 00:32:47.575 [job3] 00:32:47.575 filename=/dev/nvme0n4 00:32:47.575 Could not set queue depth (nvme0n1) 00:32:47.575 Could 
not set queue depth (nvme0n2) 00:32:47.575 Could not set queue depth (nvme0n3) 00:32:47.575 Could not set queue depth (nvme0n4) 00:32:47.575 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.575 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.575 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.575 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:47.575 fio-3.35 00:32:47.575 Starting 4 threads 00:32:50.853 09:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:50.853 09:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:50.853 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=299008, buflen=4096 00:32:50.853 fio: pid=985061, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:51.110 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:51.110 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:51.110 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=622592, buflen=4096 00:32:51.111 fio: pid=985060, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:51.368 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:32:51.368 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:51.368 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=18825216, buflen=4096 00:32:51.368 fio: pid=985058, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:51.626 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=56381440, buflen=4096 00:32:51.626 fio: pid=985059, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:51.626 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:51.626 09:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:51.626 00:32:51.626 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=985058: Wed Nov 6 09:09:04 2024 00:32:51.626 read: IOPS=1304, BW=5218KiB/s (5344kB/s)(18.0MiB/3523msec) 00:32:51.627 slat (usec): min=5, max=7775, avg=12.70, stdev=114.71 00:32:51.627 clat (usec): min=202, max=44977, avg=745.69, stdev=4424.77 00:32:51.627 lat (usec): min=208, max=44994, avg=756.69, stdev=4425.26 00:32:51.627 clat percentiles (usec): 00:32:51.627 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:32:51.627 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:32:51.627 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 375], 95.00th=[ 510], 00:32:51.627 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:51.627 | 99.99th=[44827] 00:32:51.627 bw ( KiB/s): min= 96, max=13568, per=26.26%, avg=5124.00, stdev=6143.68, samples=6 00:32:51.627 iops : min= 24, max= 3392, avg=1281.00, 
stdev=1535.92, samples=6 00:32:51.627 lat (usec) : 250=54.69%, 500=39.81%, 750=4.33% 00:32:51.627 lat (msec) : 50=1.15% 00:32:51.627 cpu : usr=1.16%, sys=2.04%, ctx=4599, majf=0, minf=2 00:32:51.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 issued rwts: total=4597,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:51.627 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=985059: Wed Nov 6 09:09:04 2024 00:32:51.627 read: IOPS=3613, BW=14.1MiB/s (14.8MB/s)(53.8MiB/3810msec) 00:32:51.627 slat (usec): min=5, max=15746, avg=12.23, stdev=191.12 00:32:51.627 clat (usec): min=178, max=12037, avg=261.55, stdev=117.98 00:32:51.627 lat (usec): min=184, max=16036, avg=273.78, stdev=225.59 00:32:51.627 clat percentiles (usec): 00:32:51.627 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:32:51.627 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 255], 00:32:51.627 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 359], 00:32:51.627 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[ 775], 99.95th=[ 914], 00:32:51.627 | 99.99th=[ 1303] 00:32:51.627 bw ( KiB/s): min=12000, max=15976, per=73.62%, avg=14366.14, stdev=1283.23, samples=7 00:32:51.627 iops : min= 3000, max= 3994, avg=3591.43, stdev=320.83, samples=7 00:32:51.627 lat (usec) : 250=55.83%, 500=42.64%, 750=1.39%, 1000=0.09% 00:32:51.627 lat (msec) : 2=0.03%, 20=0.01% 00:32:51.627 cpu : usr=2.07%, sys=4.94%, ctx=13772, majf=0, minf=2 00:32:51.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 issued rwts: total=13766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:51.627 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=985060: Wed Nov 6 09:09:04 2024 00:32:51.627 read: IOPS=47, BW=187KiB/s (192kB/s)(608KiB/3244msec) 00:32:51.627 slat (nsec): min=6108, max=34745, avg=14418.83, stdev=7362.18 00:32:51.627 clat (usec): min=225, max=41505, avg=21169.71, stdev=20345.17 00:32:51.627 lat (usec): min=232, max=41529, avg=21184.13, stdev=20345.66 00:32:51.627 clat percentiles (usec): 00:32:51.627 | 1.00th=[ 229], 5.00th=[ 245], 10.00th=[ 277], 20.00th=[ 306], 00:32:51.627 | 30.00th=[ 375], 40.00th=[ 429], 50.00th=[40633], 60.00th=[40633], 00:32:51.627 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:51.627 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:32:51.627 | 99.99th=[41681] 00:32:51.627 bw ( KiB/s): min= 120, max= 352, per=0.99%, avg=193.33, stdev=82.12, samples=6 00:32:51.627 iops : min= 30, max= 88, avg=48.33, stdev=20.53, samples=6 00:32:51.627 lat (usec) : 250=5.23%, 500=41.18%, 750=1.31%, 1000=0.65% 00:32:51.627 lat (msec) : 50=50.98% 00:32:51.627 cpu : usr=0.15%, sys=0.00%, ctx=153, majf=0, minf=2 00:32:51.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 issued rwts: total=153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:51.627 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=985061: Wed Nov 6 09:09:04 2024 00:32:51.627 read: IOPS=25, BW=99.4KiB/s (102kB/s)(292KiB/2938msec) 00:32:51.627 slat (nsec): 
min=13245, max=59345, avg=18830.70, stdev=8802.13 00:32:51.627 clat (usec): min=363, max=42981, avg=39899.48, stdev=6678.45 00:32:51.627 lat (usec): min=399, max=42998, avg=39918.38, stdev=6673.56 00:32:51.627 clat percentiles (usec): 00:32:51.627 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:51.627 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:51.627 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:51.627 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:32:51.627 | 99.99th=[42730] 00:32:51.627 bw ( KiB/s): min= 96, max= 104, per=0.51%, avg=99.20, stdev= 4.38, samples=5 00:32:51.627 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:32:51.627 lat (usec) : 500=2.70% 00:32:51.627 lat (msec) : 50=95.95% 00:32:51.627 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:32:51.627 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:51.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:51.627 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:51.627 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:51.627 00:32:51.627 Run status group 0 (all jobs): 00:32:51.627 READ: bw=19.1MiB/s (20.0MB/s), 99.4KiB/s-14.1MiB/s (102kB/s-14.8MB/s), io=72.6MiB (76.1MB), run=2938-3810msec 00:32:51.627 00:32:51.627 Disk stats (read/write): 00:32:51.627 nvme0n1: ios=4037/0, merge=0/0, ticks=3235/0, in_queue=3235, util=95.97% 00:32:51.627 nvme0n2: ios=12920/0, merge=0/0, ticks=3304/0, in_queue=3304, util=95.39% 00:32:51.627 nvme0n3: ios=149/0, merge=0/0, ticks=3097/0, in_queue=3097, util=96.82% 00:32:51.627 nvme0n4: ios=93/0, merge=0/0, ticks=3181/0, in_queue=3181, util=99.49% 00:32:51.885 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:51.885 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:52.143 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:52.143 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:52.401 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:52.401 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:52.659 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:52.659 09:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:53.223 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:53.223 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 984947 00:32:53.223 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:53.223 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:53.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:53.223 09:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:53.223 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:53.224 nvmf hotplug test: fio failed as expected 00:32:53.224 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:53.481 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:53.481 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:53.481 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:53.481 09:09:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:53.481 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:53.481 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:53.482 rmmod nvme_tcp 00:32:53.482 rmmod nvme_fabrics 00:32:53.482 rmmod nvme_keyring 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 982907 ']' 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 982907 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 982907 ']' 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 982907 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@955 -- # uname 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 982907 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 982907' 00:32:53.482 killing process with pid 982907 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 982907 00:32:53.482 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 982907 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:53.740 09:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.271 09:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:56.271 00:32:56.271 real 0m23.719s 00:32:56.271 user 1m7.536s 00:32:56.271 sys 0m10.101s 00:32:56.271 09:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:56.271 09:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:56.271 ************************************ 00:32:56.271 END TEST nvmf_fio_target 00:32:56.271 ************************************ 00:32:56.271 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:56.272 ************************************ 00:32:56.272 START TEST nvmf_bdevio 00:32:56.272 ************************************ 00:32:56.272 09:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:56.272 * Looking for test storage... 00:32:56.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lcov --version 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:56.272 09:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > 
ver2[v] )) 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:32:56.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.272 --rc genhtml_branch_coverage=1 00:32:56.272 --rc genhtml_function_coverage=1 00:32:56.272 --rc genhtml_legend=1 00:32:56.272 --rc geninfo_all_blocks=1 00:32:56.272 --rc geninfo_unexecuted_blocks=1 00:32:56.272 00:32:56.272 ' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:32:56.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.272 --rc genhtml_branch_coverage=1 00:32:56.272 --rc genhtml_function_coverage=1 00:32:56.272 --rc genhtml_legend=1 00:32:56.272 --rc geninfo_all_blocks=1 00:32:56.272 --rc geninfo_unexecuted_blocks=1 00:32:56.272 00:32:56.272 ' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:32:56.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.272 --rc genhtml_branch_coverage=1 00:32:56.272 --rc genhtml_function_coverage=1 00:32:56.272 --rc genhtml_legend=1 00:32:56.272 --rc geninfo_all_blocks=1 00:32:56.272 --rc geninfo_unexecuted_blocks=1 00:32:56.272 00:32:56.272 ' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:32:56.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:56.272 --rc genhtml_branch_coverage=1 
00:32:56.272 --rc genhtml_function_coverage=1 00:32:56.272 --rc genhtml_legend=1 00:32:56.272 --rc geninfo_all_blocks=1 00:32:56.272 --rc geninfo_unexecuted_blocks=1 00:32:56.272 00:32:56.272 ' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 
-- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:56.272 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:56.273 09:09:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:56.273 09:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:58.174 09:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:58.174 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.174 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:58.175 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:58.175 09:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:58.175 Found net devices under 0000:09:00.0: cvl_0_0 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:58.175 Found net devices under 0000:09:00.1: cvl_0_1 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:58.175 09:09:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:58.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:58.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:32:58.175 00:32:58.175 --- 10.0.0.2 ping statistics --- 00:32:58.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.175 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:58.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:58.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:32:58.175 00:32:58.175 --- 10.0.0.1 ping statistics --- 00:32:58.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:58.175 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@507 -- # nvmfpid=988251 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 988251 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 988251 ']' 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:58.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:58.175 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.432 [2024-11-06 09:09:11.504751] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:58.432 [2024-11-06 09:09:11.505824] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:32:58.433 [2024-11-06 09:09:11.505903] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:58.433 [2024-11-06 09:09:11.576071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:58.433 [2024-11-06 09:09:11.630365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:58.433 [2024-11-06 09:09:11.630428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:58.433 [2024-11-06 09:09:11.630457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:58.433 [2024-11-06 09:09:11.630468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:58.433 [2024-11-06 09:09:11.630477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:58.433 [2024-11-06 09:09:11.632103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:58.433 [2024-11-06 09:09:11.632162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:58.433 [2024-11-06 09:09:11.632258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:58.433 [2024-11-06 09:09:11.632261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:58.691 [2024-11-06 09:09:11.722733] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:58.691 [2024-11-06 09:09:11.722959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:58.691 [2024-11-06 09:09:11.723234] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:58.691 [2024-11-06 09:09:11.723966] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:58.691 [2024-11-06 09:09:11.724224] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 [2024-11-06 09:09:11.780991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 Malloc0 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:58.691 [2024-11-06 09:09:11.849090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:58.691 { 00:32:58.691 "params": { 00:32:58.691 "name": "Nvme$subsystem", 00:32:58.691 "trtype": "$TEST_TRANSPORT", 00:32:58.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:58.691 "adrfam": "ipv4", 00:32:58.691 "trsvcid": "$NVMF_PORT", 00:32:58.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:58.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:58.691 "hdgst": ${hdgst:-false}, 00:32:58.691 "ddgst": ${ddgst:-false} 00:32:58.691 }, 00:32:58.691 "method": "bdev_nvme_attach_controller" 00:32:58.691 } 00:32:58.691 EOF 00:32:58.691 )") 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 
00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:32:58.691 09:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:58.691 "params": { 00:32:58.691 "name": "Nvme1", 00:32:58.691 "trtype": "tcp", 00:32:58.691 "traddr": "10.0.0.2", 00:32:58.691 "adrfam": "ipv4", 00:32:58.691 "trsvcid": "4420", 00:32:58.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:58.692 "hdgst": false, 00:32:58.692 "ddgst": false 00:32:58.692 }, 00:32:58.692 "method": "bdev_nvme_attach_controller" 00:32:58.692 }' 00:32:58.692 [2024-11-06 09:09:11.901533] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:32:58.692 [2024-11-06 09:09:11.901606] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid988312 ] 00:32:58.692 [2024-11-06 09:09:11.974599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:58.950 [2024-11-06 09:09:12.040341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:58.950 [2024-11-06 09:09:12.040391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:58.950 [2024-11-06 09:09:12.040395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.208 I/O targets: 00:32:59.208 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:59.208 00:32:59.208 00:32:59.208 CUnit - A unit testing framework for C - Version 2.1-3 00:32:59.208 http://cunit.sourceforge.net/ 00:32:59.208 00:32:59.208 00:32:59.208 Suite: bdevio tests on: Nvme1n1 00:32:59.208 Test: blockdev write read block ...passed 00:32:59.208 Test: blockdev write zeroes read block ...passed 00:32:59.208 Test: blockdev write zeroes read no split ...passed 00:32:59.208 Test: blockdev 
write zeroes read split ...passed 00:32:59.208 Test: blockdev write zeroes read split partial ...passed 00:32:59.208 Test: blockdev reset ...[2024-11-06 09:09:12.360017] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:59.208 [2024-11-06 09:09:12.360118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2136640 (9): Bad file descriptor 00:32:59.208 [2024-11-06 09:09:12.411914] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:59.208 passed 00:32:59.208 Test: blockdev write read 8 blocks ...passed 00:32:59.208 Test: blockdev write read size > 128k ...passed 00:32:59.208 Test: blockdev write read invalid size ...passed 00:32:59.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:59.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:59.208 Test: blockdev write read max offset ...passed 00:32:59.465 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:59.465 Test: blockdev writev readv 8 blocks ...passed 00:32:59.465 Test: blockdev writev readv 30 x 1block ...passed 00:32:59.465 Test: blockdev writev readv block ...passed 00:32:59.465 Test: blockdev writev readv size > 128k ...passed 00:32:59.465 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:59.466 Test: blockdev comparev and writev ...[2024-11-06 09:09:12.624198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.624234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.624259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:32:59.466 [2024-11-06 09:09:12.624277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.624665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.624692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.624715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.624730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.625119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.625145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.625167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.625182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.625551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.625576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.625598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:59.466 [2024-11-06 09:09:12.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:59.466 passed 00:32:59.466 Test: blockdev nvme passthru rw ...passed 00:32:59.466 Test: blockdev nvme passthru vendor specific ...[2024-11-06 09:09:12.708085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:59.466 [2024-11-06 09:09:12.708113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.708278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:59.466 [2024-11-06 09:09:12.708302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.708453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:59.466 [2024-11-06 09:09:12.708477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:59.466 [2024-11-06 09:09:12.708628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:59.466 [2024-11-06 09:09:12.708651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:59.466 passed 00:32:59.466 Test: blockdev nvme admin passthru ...passed 00:32:59.724 Test: blockdev copy ...passed 00:32:59.724 00:32:59.724 Run Summary: Type Total Ran Passed Failed Inactive 00:32:59.724 suites 1 1 n/a 0 0 00:32:59.724 tests 23 23 23 0 0 00:32:59.724 asserts 152 152 152 0 n/a 00:32:59.724 00:32:59.724 Elapsed time = 
1.013 seconds 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:59.724 09:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:59.724 rmmod nvme_tcp 00:32:59.724 rmmod nvme_fabrics 00:32:59.724 rmmod nvme_keyring 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:59.982 09:09:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 988251 ']' 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 988251 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 988251 ']' 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 988251 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 988251 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 988251' 00:32:59.982 killing process with pid 988251 00:32:59.982 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 988251 00:32:59.983 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 988251 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:00.242 09:09:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:00.242 09:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.152 09:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:02.152 00:33:02.152 real 0m6.308s 00:33:02.152 user 0m7.981s 00:33:02.152 sys 0m2.453s 00:33:02.152 09:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:02.152 09:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 ************************************ 00:33:02.152 END TEST nvmf_bdevio 00:33:02.152 ************************************ 00:33:02.152 09:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:02.152 00:33:02.152 real 3m55.005s 00:33:02.152 user 8m50.215s 00:33:02.152 sys 1m26.507s 00:33:02.152 09:09:15 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:02.152 09:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 ************************************ 00:33:02.152 END TEST nvmf_target_core_interrupt_mode 00:33:02.152 ************************************ 00:33:02.152 09:09:15 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:02.152 09:09:15 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:02.152 09:09:15 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:02.152 09:09:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:02.152 ************************************ 00:33:02.152 START TEST nvmf_interrupt 00:33:02.152 ************************************ 00:33:02.152 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:02.411 * Looking for test storage... 
00:33:02.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lcov --version 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.411 --rc genhtml_branch_coverage=1 00:33:02.411 --rc genhtml_function_coverage=1 00:33:02.411 --rc genhtml_legend=1 00:33:02.411 --rc geninfo_all_blocks=1 00:33:02.411 --rc geninfo_unexecuted_blocks=1 00:33:02.411 00:33:02.411 ' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.411 --rc genhtml_branch_coverage=1 00:33:02.411 --rc 
genhtml_function_coverage=1 00:33:02.411 --rc genhtml_legend=1 00:33:02.411 --rc geninfo_all_blocks=1 00:33:02.411 --rc geninfo_unexecuted_blocks=1 00:33:02.411 00:33:02.411 ' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.411 --rc genhtml_branch_coverage=1 00:33:02.411 --rc genhtml_function_coverage=1 00:33:02.411 --rc genhtml_legend=1 00:33:02.411 --rc geninfo_all_blocks=1 00:33:02.411 --rc geninfo_unexecuted_blocks=1 00:33:02.411 00:33:02.411 ' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:02.411 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.411 --rc genhtml_branch_coverage=1 00:33:02.411 --rc genhtml_function_coverage=1 00:33:02.411 --rc genhtml_legend=1 00:33:02.411 --rc geninfo_all_blocks=1 00:33:02.411 --rc geninfo_unexecuted_blocks=1 00:33:02.411 00:33:02.411 ' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:02.411 
09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.411 
09:09:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:02.411 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:02.412 09:09:15 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:02.412 
09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:02.412 09:09:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.945 09:09:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:04.945 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:04.945 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:04.945 09:09:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:04.945 Found net devices under 0000:09:00.0: cvl_0_0 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:04.945 Found net devices under 0000:09:00.1: cvl_0_1 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:04.945 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.946 09:09:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:04.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:33:04.946 00:33:04.946 --- 10.0.0.2 ping statistics --- 00:33:04.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.946 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:04.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:04.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:33:04.946 00:33:04.946 --- 10.0.0.1 ping statistics --- 00:33:04.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:04.946 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:04.946 09:09:17 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=990482 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 990482 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 990482 ']' 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:04.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:04.946 09:09:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:04.946 [2024-11-06 09:09:17.954564] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:04.946 [2024-11-06 09:09:17.955608] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:33:04.946 [2024-11-06 09:09:17.955668] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:04.946 [2024-11-06 09:09:18.023583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:04.946 [2024-11-06 09:09:18.076674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:04.946 [2024-11-06 09:09:18.076747] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:04.946 [2024-11-06 09:09:18.076776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:04.946 [2024-11-06 09:09:18.076787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:04.946 [2024-11-06 09:09:18.076797] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:04.946 [2024-11-06 09:09:18.078233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.946 [2024-11-06 09:09:18.078238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.946 [2024-11-06 09:09:18.161036] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:04.946 [2024-11-06 09:09:18.161072] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:04.946 [2024-11-06 09:09:18.161345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:04.946 5000+0 records in 00:33:04.946 5000+0 records out 00:33:04.946 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0142808 s, 717 MB/s 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:04.946 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.204 AIO0 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.204 09:09:18 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.204 [2024-11-06 09:09:18.258949] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.204 [2024-11-06 09:09:18.287144] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 990482 0 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 990482 0 idle 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.204 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990482 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.26 reactor_0' 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990482 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.26 reactor_0 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 990482 1 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 990482 1 idle 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:05.205 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990493 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990493 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 
reactor_1 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=990535 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 990482 0 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 990482 0 busy 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:05.490 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990482 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.47 reactor_0' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990482 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:00.47 reactor_0 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- 
# BUSY_THRESHOLD=30 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 990482 1 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 990482 1 busy 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990493 root 20 0 128.2g 47616 34176 R 93.8 0.1 0:00.27 reactor_1' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990493 root 20 0 128.2g 47616 34176 R 93.8 0.1 0:00.27 reactor_1 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.8 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.791 09:09:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 990535 00:33:15.781 [2024-11-06 09:09:28.730551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ea850 is same with the state(6) to be set 00:33:15.781 [2024-11-06 09:09:28.730619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ea850 is same with the state(6) to be set 00:33:15.781 Initializing NVMe Controllers 00:33:15.781 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.781 Controller IO queue size 256, less than required. 00:33:15.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:15.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:15.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:15.781 Initialization complete. Launching workers. 
00:33:15.781 ======================================================== 00:33:15.781 Latency(us) 00:33:15.781 Device Information : IOPS MiB/s Average min max 00:33:15.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14092.50 55.05 18179.00 4155.93 58648.94 00:33:15.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 12888.10 50.34 19877.86 4461.41 22807.55 00:33:15.781 ======================================================== 00:33:15.781 Total : 26980.60 105.39 18990.51 4155.93 58648.94 00:33:15.781 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 990482 0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 990482 0 idle 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990482 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.21 reactor_0' 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990482 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.21 reactor_0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 990482 1 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 990482 1 idle 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:15.781 09:09:28 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:15.781 09:09:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:16.039 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990493 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.98 reactor_1' 00:33:16.039 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990493 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.98 reactor_1 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.040 09:09:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:16.297 09:09:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:33:16.297 09:09:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:33:16.297 09:09:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:16.297 09:09:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:16.297 09:09:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 990482 0 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 990482 0 idle 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:18.197 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990482 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.30 reactor_0' 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990482 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.30 reactor_0 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 990482 1 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 990482 1 idle 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=990482 00:33:18.455 
09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:18.455 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 990482 -w 256 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 990493 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:10.01 reactor_1' 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 990493 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:10.01 reactor_1 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:18.456 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:18.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:18.714 rmmod nvme_tcp 00:33:18.714 rmmod nvme_fabrics 00:33:18.714 rmmod nvme_keyring 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:18.714 09:09:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 990482 ']' 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 990482 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 990482 ']' 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 990482 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 990482 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 990482' 00:33:18.714 killing process with pid 990482 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 990482 00:33:18.714 09:09:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 990482 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@789 -- # iptables-restore 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:18.973 09:09:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:21.507 09:09:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:21.507 00:33:21.507 real 0m18.772s 00:33:21.507 user 0m36.729s 00:33:21.507 sys 0m6.666s 00:33:21.507 09:09:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.507 09:09:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 ************************************ 00:33:21.507 END TEST nvmf_interrupt 00:33:21.507 ************************************ 00:33:21.507 00:33:21.507 real 24m51.288s 00:33:21.507 user 58m8.182s 00:33:21.507 sys 6m41.669s 00:33:21.507 09:09:34 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:21.507 09:09:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 ************************************ 00:33:21.507 END TEST nvmf_tcp 00:33:21.507 ************************************ 00:33:21.507 09:09:34 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:33:21.507 09:09:34 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:21.507 09:09:34 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:21.507 09:09:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:21.507 09:09:34 -- common/autotest_common.sh@10 -- # set +x 00:33:21.507 ************************************ 
00:33:21.507 START TEST spdkcli_nvmf_tcp 00:33:21.507 ************************************ 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:21.507 * Looking for test storage... 00:33:21.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lcov --version 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:21.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.507 --rc genhtml_branch_coverage=1 00:33:21.507 --rc genhtml_function_coverage=1 00:33:21.507 --rc genhtml_legend=1 00:33:21.507 --rc geninfo_all_blocks=1 00:33:21.507 --rc geninfo_unexecuted_blocks=1 00:33:21.507 00:33:21.507 ' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:21.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.507 --rc genhtml_branch_coverage=1 00:33:21.507 --rc genhtml_function_coverage=1 00:33:21.507 --rc genhtml_legend=1 00:33:21.507 --rc geninfo_all_blocks=1 
00:33:21.507 --rc geninfo_unexecuted_blocks=1 00:33:21.507 00:33:21.507 ' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:21.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.507 --rc genhtml_branch_coverage=1 00:33:21.507 --rc genhtml_function_coverage=1 00:33:21.507 --rc genhtml_legend=1 00:33:21.507 --rc geninfo_all_blocks=1 00:33:21.507 --rc geninfo_unexecuted_blocks=1 00:33:21.507 00:33:21.507 ' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:21.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:21.507 --rc genhtml_branch_coverage=1 00:33:21.507 --rc genhtml_function_coverage=1 00:33:21.507 --rc genhtml_legend=1 00:33:21.507 --rc geninfo_all_blocks=1 00:33:21.507 --rc geninfo_unexecuted_blocks=1 00:33:21.507 00:33:21.507 ' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.507 09:09:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:21.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=992534 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 992534 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 992534 ']' 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.508 09:09:34 
spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.508 [2024-11-06 09:09:34.511910] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:33:21.508 [2024-11-06 09:09:34.511995] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992534 ] 00:33:21.508 [2024-11-06 09:09:34.578722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:21.508 [2024-11-06 09:09:34.640404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.508 [2024-11-06 09:09:34.640408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:21.508 
09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.508 09:09:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:21.508 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:21.508 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:21.508 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:21.508 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:21.508 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:21.508 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:21.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:21.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:21.508 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:21.508 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:21.508 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:21.508 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:21.508 ' 00:33:24.790 [2024-11-06 09:09:37.419977] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.720 [2024-11-06 09:09:38.692291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:28.248 [2024-11-06 09:09:41.035485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:33:30.144 [2024-11-06 09:09:43.037588] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:31.518 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:31.518 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:31.518 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:31.518 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:31.518 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:31.518 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:31.518 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:31.518 09:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:31.518 09:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:31.519 
09:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.519 09:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:31.519 09:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:31.519 09:09:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.519 09:09:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:31.519 09:09:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.084 09:09:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:32.084 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:32.084 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:32.084 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:32.084 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:32.084 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:32.084 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:32.084 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:32.084 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:32.084 ' 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:37.347 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:37.347 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:37.347 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:37.347 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 992534 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 992534 ']' 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 992534 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 992534 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 992534' 00:33:37.605 killing process with pid 992534 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 992534 00:33:37.605 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 992534 00:33:37.865 09:09:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # 
cleanup 00:33:37.865 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:37.865 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 992534 ']' 00:33:37.865 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 992534 00:33:37.865 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 992534 ']' 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 992534 00:33:37.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (992534) - No such process 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 992534 is not found' 00:33:37.866 Process with pid 992534 is not found 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:37.866 00:33:37.866 real 0m16.676s 00:33:37.866 user 0m35.581s 00:33:37.866 sys 0m0.755s 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:37.866 09:09:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:37.866 ************************************ 00:33:37.866 END TEST spdkcli_nvmf_tcp 00:33:37.866 ************************************ 00:33:37.866 09:09:50 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:37.866 09:09:50 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:37.866 09:09:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:37.866 09:09:50 -- common/autotest_common.sh@10 
-- # set +x 00:33:37.866 ************************************ 00:33:37.866 START TEST nvmf_identify_passthru 00:33:37.866 ************************************ 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:37.866 * Looking for test storage... 00:33:37.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lcov --version 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:37.866 09:09:51 
nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:37.866 09:09:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.866 --rc genhtml_branch_coverage=1 00:33:37.866 --rc genhtml_function_coverage=1 00:33:37.866 --rc genhtml_legend=1 00:33:37.866 --rc geninfo_all_blocks=1 00:33:37.866 --rc geninfo_unexecuted_blocks=1 00:33:37.866 00:33:37.866 ' 00:33:37.866 
09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.866 --rc genhtml_branch_coverage=1 00:33:37.866 --rc genhtml_function_coverage=1 00:33:37.866 --rc genhtml_legend=1 00:33:37.866 --rc geninfo_all_blocks=1 00:33:37.866 --rc geninfo_unexecuted_blocks=1 00:33:37.866 00:33:37.866 ' 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:33:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.866 --rc genhtml_branch_coverage=1 00:33:37.866 --rc genhtml_function_coverage=1 00:33:37.866 --rc genhtml_legend=1 00:33:37.866 --rc geninfo_all_blocks=1 00:33:37.866 --rc geninfo_unexecuted_blocks=1 00:33:37.866 00:33:37.866 ' 00:33:37.866 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:37.866 --rc genhtml_branch_coverage=1 00:33:37.866 --rc genhtml_function_coverage=1 00:33:37.866 --rc genhtml_legend=1 00:33:37.866 --rc geninfo_all_blocks=1 00:33:37.866 --rc geninfo_unexecuted_blocks=1 00:33:37.866 00:33:37.866 ' 00:33:37.866 09:09:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:37.866 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:37.867 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:38.126 09:09:51 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.126 09:09:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.126 09:09:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:38.126 09:09:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.126 09:09:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:38.126 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:38.126 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:38.126 09:09:51 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:38.126 09:09:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:40.029 
09:09:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.029 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:40.030 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:40.030 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:40.030 Found net devices under 0000:09:00.0: cvl_0_0 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.030 09:09:53 
nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:40.030 Found net devices under 0000:09:00.1: cvl_0_1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:40.030 
09:09:53 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:40.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:40.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:33:40.030 00:33:40.030 --- 10.0.0.2 ping statistics --- 00:33:40.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.030 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:40.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:40.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:33:40.030 00:33:40.030 --- 10.0.0.1 ping statistics --- 00:33:40.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:40.030 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:40.030 09:09:53 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:40.030 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.030 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:40.030 
09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # bdfs=() 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1505 -- # local bdfs 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # bdfs=($(get_nvme_bdfs)) 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1506 -- # get_nvme_bdfs 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # bdfs=() 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1494 -- # local bdfs 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:40.030 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1495 -- # jq -r '.config[].params.traddr' 00:33:40.289 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # (( 1 == 0 )) 00:33:40.289 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # printf '%s\n' 0000:0b:00.0 00:33:40.289 09:09:53 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # echo 0000:0b:00.0 00:33:40.289 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:33:40.289 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:33:40.289 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:40.289 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:40.289 09:09:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:44.474 09:09:57 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:33:44.474 09:09:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:33:44.474 09:09:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:44.474 09:09:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=997164 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:48.657 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 997164 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 997164 ']' 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:48.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:48.657 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.657 [2024-11-06 09:10:01.695568] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:33:48.658 [2024-11-06 09:10:01.695662] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:48.658 [2024-11-06 09:10:01.771689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:48.658 [2024-11-06 09:10:01.832525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:48.658 [2024-11-06 09:10:01.832581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:48.658 [2024-11-06 09:10:01.832610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:48.658 [2024-11-06 09:10:01.832621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:48.658 [2024-11-06 09:10:01.832631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:48.658 [2024-11-06 09:10:01.834206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.658 [2024-11-06 09:10:01.834264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:48.658 [2024-11-06 09:10:01.834330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:48.658 [2024-11-06 09:10:01.834334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.658 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:48.658 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:48.658 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:48.658 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.658 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.658 INFO: Log level set to 20 00:33:48.658 INFO: Requests: 00:33:48.658 { 00:33:48.658 "jsonrpc": "2.0", 00:33:48.658 "method": "nvmf_set_config", 00:33:48.658 "id": 1, 00:33:48.658 "params": { 00:33:48.658 "admin_cmd_passthru": { 00:33:48.658 "identify_ctrlr": true 00:33:48.658 } 00:33:48.658 } 00:33:48.658 } 00:33:48.658 00:33:48.916 INFO: response: 00:33:48.916 { 00:33:48.916 "jsonrpc": "2.0", 00:33:48.916 "id": 1, 00:33:48.916 "result": true 00:33:48.916 } 00:33:48.916 00:33:48.916 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.916 09:10:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:48.916 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.916 09:10:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.916 INFO: Setting log level to 20 00:33:48.916 INFO: Setting log level to 20 00:33:48.916 INFO: Log level set to 20 00:33:48.916 INFO: Log level set to 20 00:33:48.916 
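The `nvmf_set_config` exchange above is a plain JSON-RPC 2.0 call over SPDK's Unix socket. A minimal sketch of that payload, reconstructed from the request the log prints — in the live run `rpc_cmd`/`rpc.py` delivers it to `/var/tmp/spdk.sock` rather than echoing it:

```shell
# Rebuild the JSON-RPC 2.0 payload shown in the log for
# nvmf_set_config with admin-command passthru enabled.
# Here we only construct and print it; the harness sends it
# over the SPDK RPC socket instead.
payload=$(cat <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "nvmf_set_config",
 "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}
EOF
)
echo "$payload"
```

The `"result": true` response in the log is the server's acknowledgment of this exact request shape.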
INFO: Requests: 00:33:48.916 { 00:33:48.916 "jsonrpc": "2.0", 00:33:48.916 "method": "framework_start_init", 00:33:48.916 "id": 1 00:33:48.916 } 00:33:48.916 00:33:48.916 INFO: Requests: 00:33:48.916 { 00:33:48.916 "jsonrpc": "2.0", 00:33:48.916 "method": "framework_start_init", 00:33:48.916 "id": 1 00:33:48.916 } 00:33:48.916 00:33:48.916 [2024-11-06 09:10:02.043986] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:48.916 INFO: response: 00:33:48.916 { 00:33:48.916 "jsonrpc": "2.0", 00:33:48.916 "id": 1, 00:33:48.916 "result": true 00:33:48.916 } 00:33:48.916 00:33:48.916 INFO: response: 00:33:48.916 { 00:33:48.916 "jsonrpc": "2.0", 00:33:48.916 "id": 1, 00:33:48.916 "result": true 00:33:48.916 } 00:33:48.916 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.916 09:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.916 INFO: Setting log level to 40 00:33:48.916 INFO: Setting log level to 40 00:33:48.916 INFO: Setting log level to 40 00:33:48.916 [2024-11-06 09:10:02.054037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:48.916 09:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:48.916 09:10:02 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:33:48.916 09:10:02 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:48.916 09:10:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 Nvme0n1 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 [2024-11-06 09:10:04.953880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.194 09:10:04 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 [ 00:33:52.194 { 00:33:52.194 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:52.194 "subtype": "Discovery", 00:33:52.194 "listen_addresses": [], 00:33:52.194 "allow_any_host": true, 00:33:52.194 "hosts": [] 00:33:52.194 }, 00:33:52.194 { 00:33:52.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:52.194 "subtype": "NVMe", 00:33:52.194 "listen_addresses": [ 00:33:52.194 { 00:33:52.194 "trtype": "TCP", 00:33:52.194 "adrfam": "IPv4", 00:33:52.194 "traddr": "10.0.0.2", 00:33:52.194 "trsvcid": "4420" 00:33:52.194 } 00:33:52.194 ], 00:33:52.194 "allow_any_host": true, 00:33:52.194 "hosts": [], 00:33:52.194 "serial_number": "SPDK00000000000001", 00:33:52.194 "model_number": "SPDK bdev Controller", 00:33:52.194 "max_namespaces": 1, 00:33:52.194 "min_cntlid": 1, 00:33:52.194 "max_cntlid": 65519, 00:33:52.194 "namespaces": [ 00:33:52.194 { 00:33:52.194 "nsid": 1, 00:33:52.194 "bdev_name": "Nvme0n1", 00:33:52.194 "name": "Nvme0n1", 00:33:52.194 "nguid": "76A121851BC444939BFA263B2F39DD1B", 00:33:52.194 "uuid": "76a12185-1bc4-4493-9bfa-263b2f39dd1b" 00:33:52.194 } 00:33:52.194 ] 00:33:52.194 } 00:33:52.194 ] 00:33:52.194 09:10:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:52.194 09:10:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
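`nvmf_get_subsystems` returns the JSON array printed above, and test scripts typically pick fields out of such output with `jq`, the same way the harness already pipes `gen_nvme.sh` through `jq` earlier in this log. A sketch with the subsystem list inlined (values copied from the log; in practice the JSON would come from `rpc_cmd nvmf_get_subsystems`):

```shell
# Extract the NVMe subsystem's first TCP listener (traddr:trsvcid)
# from nvmf_get_subsystems-style JSON. The discovery subsystem has
# no listener here, so select() filters it out.
subsystems='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery", "listen_addresses": []},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420"}]}
]'
listener=$(echo "$subsystems" | jq -r \
  '.[] | select(.subtype == "NVMe") | .listen_addresses[0] | "\(.traddr):\(.trsvcid)"')
echo "$listener"   # prints "10.0.0.2:4420"
```

This is the address/port pair the subsequent `spdk_nvme_identify -r ' trtype:tcp ... trsvcid:4420 ...'` invocations connect to.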
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:52.194 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:52.194 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:52.194 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:52.451 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:52.451 09:10:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:52.451 rmmod nvme_tcp 00:33:52.451 rmmod nvme_fabrics 00:33:52.451 rmmod nvme_keyring 00:33:52.451 09:10:05 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@515 -- # '[' -n 997164 ']' 00:33:52.451 09:10:05 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 997164 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 997164 ']' 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 997164 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 997164 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 997164' 00:33:52.451 killing process with pid 997164 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 997164 00:33:52.451 09:10:05 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 997164 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@789 -- 
# grep -v SPDK_NVMF 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.826 09:10:07 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.826 09:10:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:53.826 09:10:07 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.463 09:10:09 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:56.463 00:33:56.463 real 0m18.079s 00:33:56.463 user 0m26.487s 00:33:56.463 sys 0m3.141s 00:33:56.463 09:10:09 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:56.463 09:10:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:56.463 ************************************ 00:33:56.463 END TEST nvmf_identify_passthru 00:33:56.463 ************************************ 00:33:56.463 09:10:09 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:56.463 09:10:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:56.463 09:10:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:56.463 09:10:09 -- common/autotest_common.sh@10 -- # set +x 00:33:56.463 ************************************ 00:33:56.463 START TEST nvmf_dif 00:33:56.463 ************************************ 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:56.463 * Looking for test storage... 
00:33:56.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1689 -- # lcov --version 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:33:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.463 --rc genhtml_branch_coverage=1 00:33:56.463 --rc genhtml_function_coverage=1 00:33:56.463 --rc genhtml_legend=1 00:33:56.463 --rc geninfo_all_blocks=1 00:33:56.463 --rc geninfo_unexecuted_blocks=1 00:33:56.463 00:33:56.463 ' 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:33:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.463 --rc genhtml_branch_coverage=1 00:33:56.463 --rc genhtml_function_coverage=1 00:33:56.463 --rc genhtml_legend=1 00:33:56.463 --rc geninfo_all_blocks=1 00:33:56.463 --rc geninfo_unexecuted_blocks=1 00:33:56.463 00:33:56.463 ' 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1703 -- # export 
'LCOV=lcov 00:33:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.463 --rc genhtml_branch_coverage=1 00:33:56.463 --rc genhtml_function_coverage=1 00:33:56.463 --rc genhtml_legend=1 00:33:56.463 --rc geninfo_all_blocks=1 00:33:56.463 --rc geninfo_unexecuted_blocks=1 00:33:56.463 00:33:56.463 ' 00:33:56.463 09:10:09 nvmf_dif -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:33:56.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.463 --rc genhtml_branch_coverage=1 00:33:56.463 --rc genhtml_function_coverage=1 00:33:56.463 --rc genhtml_legend=1 00:33:56.463 --rc geninfo_all_blocks=1 00:33:56.463 --rc geninfo_unexecuted_blocks=1 00:33:56.463 00:33:56.463 ' 00:33:56.463 09:10:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:56.463 09:10:09 nvmf_dif -- 
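The `cmp_versions` trace above (from `scripts/common.sh`) splits two dotted versions on `.` and compares them segment by segment, treating missing segments as 0 — which is why `lt 1.15 2` succeeds on the first segment. A condensed sketch of that logic (the function name `version_lt` is ours, not the script's):

```shell
# Segment-wise dotted-version comparison, condensed from the
# cmp_versions logic traced above. Returns 0 (true) when $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)   # split "1.15" -> (1 15) via IFS
  local i x y
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}          # missing segments count as 0
    if ((x < y)); then return 0; fi
    if ((x > y)); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo yes || echo no   # prints "yes": 1 < 2 on the first segment
```

The harness uses this to decide whether the installed `lcov` is older than 2 and pick the matching coverage options.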
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.463 09:10:09 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.463 09:10:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.463 09:10:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.463 09:10:09 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.464 09:10:09 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.464 09:10:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:56.464 09:10:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.464 09:10:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:56.464 09:10:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:33:56.464 09:10:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:56.464 09:10:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:56.464 09:10:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.464 09:10:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:56.464 09:10:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:56.464 09:10:09 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.464 09:10:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:58.365 09:10:11 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:58.365 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:58.365 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:58.365 09:10:11 nvmf_dif -- 
nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:58.365 Found net devices under 0000:09:00.0: cvl_0_0 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:58.365 Found net devices under 0000:09:00.1: cvl_0_1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:58.365 
09:10:11 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:58.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:58.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:33:58.365 00:33:58.365 --- 10.0.0.2 ping statistics --- 00:33:58.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.365 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:33:58.365 09:10:11 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:58.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:58.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:33:58.366 00:33:58.366 --- 10.0.0.1 ping statistics --- 00:33:58.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:58.366 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:33:58.366 09:10:11 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:58.366 09:10:11 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:33:58.366 09:10:11 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:33:58.366 09:10:11 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:59.737 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:59.737 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:59.737 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:59.737 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:59.737 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:59.737 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:59.737 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:59.737 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:59.737 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:59.737 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:59.737 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:59.737 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:59.737 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:33:59.737 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:59.737 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:59.737 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:59.737 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:59.737 09:10:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:59.737 09:10:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=1000440 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:59.737 09:10:12 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 1000440 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1000440 ']' 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:33:59.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:59.737 09:10:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.737 [2024-11-06 09:10:12.887550] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:33:59.737 [2024-11-06 09:10:12.887635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.737 [2024-11-06 09:10:12.964587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:59.737 [2024-11-06 09:10:13.018488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:59.737 [2024-11-06 09:10:13.018543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.737 [2024-11-06 09:10:13.018566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.737 [2024-11-06 09:10:13.018577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.737 [2024-11-06 09:10:13.018587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:59.737 [2024-11-06 09:10:13.019128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:33:59.996 09:10:13 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 09:10:13 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.996 09:10:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:59.996 09:10:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 [2024-11-06 09:10:13.148616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.996 09:10:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 ************************************ 00:33:59.996 START TEST fio_dif_1_default 00:33:59.996 ************************************ 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 bdev_null0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:59.996 [2024-11-06 09:10:13.204909] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:33:59.996 { 00:33:59.996 "params": { 00:33:59.996 "name": "Nvme$subsystem", 00:33:59.996 "trtype": "$TEST_TRANSPORT", 00:33:59.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:59.996 "adrfam": "ipv4", 00:33:59.996 "trsvcid": "$NVMF_PORT", 00:33:59.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:59.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:59.996 "hdgst": ${hdgst:-false}, 00:33:59.996 "ddgst": ${ddgst:-false} 00:33:59.996 }, 00:33:59.996 "method": "bdev_nvme_attach_controller" 00:33:59.996 } 00:33:59.996 EOF 00:33:59.996 )") 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.996 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:59.997 09:10:13 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:33:59.997 "params": { 00:33:59.997 "name": "Nvme0", 00:33:59.997 "trtype": "tcp", 00:33:59.997 "traddr": "10.0.0.2", 00:33:59.997 "adrfam": "ipv4", 00:33:59.997 "trsvcid": "4420", 00:33:59.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:59.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:59.997 "hdgst": false, 00:33:59.997 "ddgst": false 00:33:59.997 }, 00:33:59.997 "method": "bdev_nvme_attach_controller" 00:33:59.997 }' 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:59.997 09:10:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:00.255 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:00.255 fio-3.35 
00:34:00.255 Starting 1 thread 00:34:12.446 00:34:12.446 filename0: (groupid=0, jobs=1): err= 0: pid=1000665: Wed Nov 6 09:10:24 2024 00:34:12.446 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10009msec) 00:34:12.446 slat (nsec): min=6731, max=83499, avg=9056.81, stdev=4579.07 00:34:12.446 clat (usec): min=508, max=45879, avg=20954.53, stdev=20402.66 00:34:12.446 lat (usec): min=535, max=45921, avg=20963.59, stdev=20402.97 00:34:12.446 clat percentiles (usec): 00:34:12.446 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 570], 20.00th=[ 627], 00:34:12.446 | 30.00th=[ 742], 40.00th=[ 799], 50.00th=[ 881], 60.00th=[41157], 00:34:12.446 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:34:12.446 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:34:12.446 | 99.99th=[45876] 00:34:12.446 bw ( KiB/s): min= 704, max= 896, per=99.80%, avg=761.60, stdev=45.96, samples=20 00:34:12.446 iops : min= 176, max= 224, avg=190.40, stdev=11.49, samples=20 00:34:12.447 lat (usec) : 750=30.45%, 1000=19.86% 00:34:12.447 lat (msec) : 50=49.69% 00:34:12.447 cpu : usr=90.98%, sys=8.73%, ctx=25, majf=0, minf=207 00:34:12.447 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.447 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.447 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.447 00:34:12.447 Run status group 0 (all jobs): 00:34:12.447 READ: bw=763KiB/s (781kB/s), 763KiB/s-763KiB/s (781kB/s-781kB/s), io=7632KiB (7815kB), run=10009-10009msec 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 00:34:12.447 real 0m11.142s 00:34:12.447 user 0m10.253s 00:34:12.447 sys 0m1.135s 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 ************************************ 00:34:12.447 END TEST fio_dif_1_default 00:34:12.447 ************************************ 00:34:12.447 09:10:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:12.447 09:10:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:12.447 09:10:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 ************************************ 00:34:12.447 START TEST fio_dif_1_multi_subsystems 00:34:12.447 ************************************ 00:34:12.447 09:10:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 bdev_null0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 [2024-11-06 09:10:24.401015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 bdev_null1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:12.447 { 00:34:12.447 "params": { 00:34:12.447 "name": "Nvme$subsystem", 00:34:12.447 "trtype": "$TEST_TRANSPORT", 00:34:12.447 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:12.447 "adrfam": "ipv4", 00:34:12.447 "trsvcid": "$NVMF_PORT", 00:34:12.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.447 "hdgst": ${hdgst:-false}, 00:34:12.447 "ddgst": ${ddgst:-false} 00:34:12.447 }, 00:34:12.447 "method": "bdev_nvme_attach_controller" 00:34:12.447 } 00:34:12.447 EOF 00:34:12.447 )") 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:12.447 
09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.447 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:12.448 { 00:34:12.448 "params": { 00:34:12.448 "name": "Nvme$subsystem", 00:34:12.448 "trtype": "$TEST_TRANSPORT", 00:34:12.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:12.448 "adrfam": "ipv4", 00:34:12.448 "trsvcid": "$NVMF_PORT", 00:34:12.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.448 "hdgst": ${hdgst:-false}, 00:34:12.448 "ddgst": ${ddgst:-false} 00:34:12.448 }, 00:34:12.448 "method": "bdev_nvme_attach_controller" 00:34:12.448 } 00:34:12.448 EOF 00:34:12.448 )") 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 
00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:12.448 "params": { 00:34:12.448 "name": "Nvme0", 00:34:12.448 "trtype": "tcp", 00:34:12.448 "traddr": "10.0.0.2", 00:34:12.448 "adrfam": "ipv4", 00:34:12.448 "trsvcid": "4420", 00:34:12.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.448 "hdgst": false, 00:34:12.448 "ddgst": false 00:34:12.448 }, 00:34:12.448 "method": "bdev_nvme_attach_controller" 00:34:12.448 },{ 00:34:12.448 "params": { 00:34:12.448 "name": "Nvme1", 00:34:12.448 "trtype": "tcp", 00:34:12.448 "traddr": "10.0.0.2", 00:34:12.448 "adrfam": "ipv4", 00:34:12.448 "trsvcid": "4420", 00:34:12.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:12.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:12.448 "hdgst": false, 00:34:12.448 "ddgst": false 00:34:12.448 }, 00:34:12.448 "method": "bdev_nvme_attach_controller" 00:34:12.448 }' 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.448 09:10:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.448 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:12.448 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:12.448 fio-3.35 00:34:12.448 Starting 2 threads 00:34:22.413 00:34:22.413 filename0: (groupid=0, jobs=1): err= 0: pid=1002068: Wed Nov 6 09:10:35 2024 00:34:22.413 read: IOPS=106, BW=426KiB/s (436kB/s)(4256KiB/10001msec) 00:34:22.413 slat (nsec): min=6766, max=58025, avg=10448.14, stdev=5409.42 00:34:22.413 clat (usec): min=575, max=42889, avg=37561.88, stdev=11369.72 00:34:22.413 lat (usec): min=583, max=42908, avg=37572.33, stdev=11369.56 00:34:22.413 clat percentiles (usec): 00:34:22.413 | 1.00th=[ 586], 5.00th=[ 627], 10.00th=[40633], 20.00th=[41157], 00:34:22.413 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:22.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:22.413 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:22.413 | 99.99th=[42730] 00:34:22.413 bw ( KiB/s): min= 384, max= 480, per=46.38%, avg=422.74, stdev=25.19, samples=19 00:34:22.413 iops : min= 96, max= 120, avg=105.68, stdev= 6.30, samples=19 00:34:22.413 lat (usec) : 750=8.27%, 1000=0.38% 00:34:22.413 lat (msec) : 50=91.35% 00:34:22.413 cpu : usr=97.63%, sys=2.08%, ctx=18, majf=0, minf=136 00:34:22.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.413 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:22.413 filename1: (groupid=0, jobs=1): err= 0: pid=1002069: Wed Nov 6 09:10:35 2024 00:34:22.413 read: IOPS=121, BW=486KiB/s (498kB/s)(4880KiB/10041msec) 00:34:22.413 slat (nsec): min=4505, max=63364, avg=11069.03, stdev=5085.38 00:34:22.413 clat (usec): min=569, max=42850, avg=32883.73, stdev=16281.90 00:34:22.413 lat (usec): min=577, max=42872, avg=32894.79, stdev=16281.95 00:34:22.413 clat percentiles (usec): 00:34:22.413 | 1.00th=[ 603], 5.00th=[ 635], 10.00th=[ 668], 20.00th=[ 766], 00:34:22.413 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:22.413 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:22.413 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:22.413 | 99.99th=[42730] 00:34:22.413 bw ( KiB/s): min= 416, max= 574, per=53.41%, avg=486.30, stdev=45.76, samples=20 00:34:22.413 iops : min= 104, max= 143, avg=121.55, stdev=11.39, samples=20 00:34:22.413 lat (usec) : 750=19.18%, 1000=1.15% 00:34:22.413 lat (msec) : 50=79.67% 00:34:22.413 cpu : usr=97.23%, sys=2.44%, ctx=14, majf=0, minf=167 00:34:22.413 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:22.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:22.413 issued rwts: total=1220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:22.413 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:22.413 00:34:22.413 Run status group 0 (all jobs): 00:34:22.413 READ: bw=910KiB/s (932kB/s), 426KiB/s-486KiB/s (436kB/s-498kB/s), io=9136KiB (9355kB), run=10001-10041msec 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 09:10:35 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 00:34:22.672 real 0m11.461s 00:34:22.672 user 0m20.980s 00:34:22.672 sys 0m0.792s 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 ************************************ 00:34:22.672 END TEST fio_dif_1_multi_subsystems 00:34:22.672 ************************************ 00:34:22.672 09:10:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:22.672 09:10:35 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:22.672 09:10:35 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 ************************************ 00:34:22.672 START TEST fio_dif_rand_params 00:34:22.672 ************************************ 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:22.672 09:10:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 bdev_null0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:22.672 [2024-11-06 09:10:35.907130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:22.672 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:22.672 09:10:35 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:22.672 { 00:34:22.672 "params": { 00:34:22.672 "name": "Nvme$subsystem", 00:34:22.672 "trtype": "$TEST_TRANSPORT", 00:34:22.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.672 "adrfam": "ipv4", 00:34:22.672 "trsvcid": "$NVMF_PORT", 00:34:22.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.672 "hdgst": ${hdgst:-false}, 00:34:22.672 "ddgst": ${ddgst:-false} 00:34:22.672 }, 00:34:22.672 "method": "bdev_nvme_attach_controller" 00:34:22.672 } 00:34:22.672 EOF 00:34:22.672 )") 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:22.673 "params": { 00:34:22.673 "name": "Nvme0", 00:34:22.673 "trtype": "tcp", 00:34:22.673 "traddr": "10.0.0.2", 00:34:22.673 "adrfam": "ipv4", 00:34:22.673 "trsvcid": "4420", 00:34:22.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.673 "hdgst": false, 00:34:22.673 "ddgst": false 00:34:22.673 }, 00:34:22.673 "method": "bdev_nvme_attach_controller" 00:34:22.673 }' 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.673 09:10:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.931 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:22.931 ... 00:34:22.931 fio-3.35 00:34:22.931 Starting 3 threads 00:34:29.487 00:34:29.487 filename0: (groupid=0, jobs=1): err= 0: pid=1003465: Wed Nov 6 09:10:41 2024 00:34:29.487 read: IOPS=229, BW=28.7MiB/s (30.1MB/s)(145MiB/5043msec) 00:34:29.487 slat (nsec): min=5180, max=76914, avg=15772.92, stdev=5172.40 00:34:29.487 clat (usec): min=6666, max=53173, avg=13020.51, stdev=3230.52 00:34:29.487 lat (usec): min=6675, max=53192, avg=13036.28, stdev=3230.61 00:34:29.487 clat percentiles (usec): 00:34:29.487 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11338], 00:34:29.487 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13304], 00:34:29.487 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15401], 95.00th=[16188], 00:34:29.487 | 99.00th=[17433], 99.50th=[22414], 99.90th=[52691], 99.95th=[53216], 00:34:29.487 | 99.99th=[53216] 00:34:29.487 bw ( KiB/s): min=26368, max=32320, per=33.97%, avg=29574.40, stdev=1748.41, samples=10 00:34:29.487 iops : min= 206, max= 252, avg=231.00, stdev=13.57, samples=10 00:34:29.487 lat (msec) : 10=5.70%, 20=93.60%, 50=0.35%, 100=0.35% 00:34:29.487 cpu : usr=93.81%, sys=5.67%, ctx=11, majf=0, minf=158 00:34:29.487 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 issued rwts: total=1157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.487 filename0: (groupid=0, jobs=1): err= 0: pid=1003466: Wed Nov 6 09:10:41 2024 00:34:29.487 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(141MiB/5044msec) 00:34:29.487 slat (nsec): min=5145, max=49594, avg=15961.49, 
stdev=4983.68 00:34:29.487 clat (usec): min=7187, max=53402, avg=13369.81, stdev=4293.71 00:34:29.487 lat (usec): min=7199, max=53414, avg=13385.77, stdev=4293.51 00:34:29.487 clat percentiles (usec): 00:34:29.487 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:34:29.487 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12911], 60.00th=[13435], 00:34:29.487 | 70.00th=[14091], 80.00th=[14877], 90.00th=[15664], 95.00th=[16319], 00:34:29.487 | 99.00th=[17957], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:34:29.487 | 99.99th=[53216] 00:34:29.487 bw ( KiB/s): min=24576, max=30464, per=33.08%, avg=28800.00, stdev=1943.09, samples=10 00:34:29.487 iops : min= 192, max= 238, avg=225.00, stdev=15.18, samples=10 00:34:29.487 lat (msec) : 10=3.99%, 20=95.03%, 100=0.98% 00:34:29.487 cpu : usr=93.65%, sys=5.85%, ctx=9, majf=0, minf=78 00:34:29.487 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 issued rwts: total=1127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.487 filename0: (groupid=0, jobs=1): err= 0: pid=1003467: Wed Nov 6 09:10:41 2024 00:34:29.487 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(143MiB/5044msec) 00:34:29.487 slat (nsec): min=5044, max=62761, avg=20918.76, stdev=7154.14 00:34:29.487 clat (usec): min=7048, max=53470, avg=13131.81, stdev=4109.34 00:34:29.487 lat (usec): min=7060, max=53483, avg=13152.73, stdev=4108.92 00:34:29.487 clat percentiles (usec): 00:34:29.487 | 1.00th=[ 7767], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11469], 00:34:29.487 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:34:29.487 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[15533], 00:34:29.487 | 99.00th=[17695], 99.50th=[51643], 99.90th=[53216], 
99.95th=[53216], 00:34:29.487 | 99.99th=[53216] 00:34:29.487 bw ( KiB/s): min=26368, max=31744, per=33.64%, avg=29292.00, stdev=1643.54, samples=10 00:34:29.487 iops : min= 206, max= 248, avg=228.80, stdev=12.87, samples=10 00:34:29.487 lat (msec) : 10=5.93%, 20=93.11%, 50=0.17%, 100=0.78% 00:34:29.487 cpu : usr=87.96%, sys=8.51%, ctx=491, majf=0, minf=125 00:34:29.487 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:29.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:29.487 issued rwts: total=1147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:29.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:29.487 00:34:29.487 Run status group 0 (all jobs): 00:34:29.487 READ: bw=85.0MiB/s (89.2MB/s), 27.9MiB/s-28.7MiB/s (29.3MB/s-30.1MB/s), io=429MiB (450MB), run=5043-5044msec 00:34:29.487 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:29.487 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:29.487 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:29.487 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 bdev_null0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 
09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 [2024-11-06 09:10:42.268402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 bdev_null1 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 
09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:29.488 bdev_null2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem 
config 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:29.488 { 00:34:29.488 "params": { 00:34:29.488 "name": "Nvme$subsystem", 00:34:29.488 "trtype": "$TEST_TRANSPORT", 00:34:29.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.488 "adrfam": "ipv4", 00:34:29.488 "trsvcid": "$NVMF_PORT", 00:34:29.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.488 "hdgst": ${hdgst:-false}, 00:34:29.488 "ddgst": ${ddgst:-false} 00:34:29.488 }, 00:34:29.488 "method": "bdev_nvme_attach_controller" 00:34:29.488 } 00:34:29.488 EOF 00:34:29.488 )") 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # shift 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.488 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:29.489 { 00:34:29.489 "params": { 00:34:29.489 "name": "Nvme$subsystem", 00:34:29.489 "trtype": "$TEST_TRANSPORT", 00:34:29.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.489 "adrfam": "ipv4", 00:34:29.489 "trsvcid": "$NVMF_PORT", 00:34:29.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.489 "hdgst": ${hdgst:-false}, 00:34:29.489 "ddgst": ${ddgst:-false} 00:34:29.489 }, 00:34:29.489 "method": "bdev_nvme_attach_controller" 00:34:29.489 } 00:34:29.489 EOF 00:34:29.489 )") 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:29.489 { 00:34:29.489 "params": { 00:34:29.489 "name": "Nvme$subsystem", 00:34:29.489 "trtype": "$TEST_TRANSPORT", 00:34:29.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.489 "adrfam": "ipv4", 00:34:29.489 "trsvcid": "$NVMF_PORT", 00:34:29.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.489 "hdgst": ${hdgst:-false}, 00:34:29.489 "ddgst": ${ddgst:-false} 00:34:29.489 }, 00:34:29.489 "method": "bdev_nvme_attach_controller" 00:34:29.489 } 00:34:29.489 EOF 00:34:29.489 )") 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:29.489 "params": { 00:34:29.489 "name": "Nvme0", 00:34:29.489 "trtype": "tcp", 00:34:29.489 "traddr": "10.0.0.2", 00:34:29.489 "adrfam": "ipv4", 00:34:29.489 "trsvcid": "4420", 00:34:29.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.489 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.489 "hdgst": false, 00:34:29.489 "ddgst": false 00:34:29.489 }, 00:34:29.489 "method": "bdev_nvme_attach_controller" 00:34:29.489 },{ 00:34:29.489 "params": { 00:34:29.489 "name": "Nvme1", 00:34:29.489 "trtype": "tcp", 00:34:29.489 "traddr": "10.0.0.2", 00:34:29.489 "adrfam": "ipv4", 00:34:29.489 "trsvcid": "4420", 00:34:29.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:29.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:29.489 "hdgst": false, 00:34:29.489 "ddgst": false 00:34:29.489 }, 00:34:29.489 "method": "bdev_nvme_attach_controller" 00:34:29.489 },{ 00:34:29.489 "params": { 00:34:29.489 "name": "Nvme2", 00:34:29.489 "trtype": "tcp", 00:34:29.489 "traddr": "10.0.0.2", 00:34:29.489 "adrfam": "ipv4", 00:34:29.489 "trsvcid": "4420", 00:34:29.489 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:29.489 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:29.489 "hdgst": false, 00:34:29.489 "ddgst": false 00:34:29.489 }, 00:34:29.489 "method": "bdev_nvme_attach_controller" 00:34:29.489 }' 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.489 09:10:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.489 09:10:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.489 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:29.489 ... 00:34:29.489 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:29.489 ... 00:34:29.489 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:29.489 ... 
00:34:29.489 fio-3.35 00:34:29.489 Starting 24 threads 00:34:41.689 00:34:41.689 filename0: (groupid=0, jobs=1): err= 0: pid=1004330: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10005msec) 00:34:41.689 slat (usec): min=6, max=207, avg=28.46, stdev=29.44 00:34:41.689 clat (usec): min=5901, max=59281, avg=34862.38, stdev=5136.18 00:34:41.689 lat (usec): min=5913, max=59291, avg=34890.84, stdev=5128.66 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 1.00th=[15795], 5.00th=[32637], 10.00th=[32900], 20.00th=[33424], 00:34:41.689 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:41.689 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.689 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50070], 99.95th=[58459], 00:34:41.689 | 99.99th=[59507] 00:34:41.689 bw ( KiB/s): min= 1408, max= 2176, per=4.20%, avg=1818.95, stdev=188.79, samples=19 00:34:41.689 iops : min= 352, max= 544, avg=454.74, stdev=47.20, samples=19 00:34:41.689 lat (msec) : 10=0.35%, 20=2.28%, 50=97.24%, 100=0.13% 00:34:41.689 cpu : usr=98.34%, sys=1.25%, ctx=20, majf=0, minf=33 00:34:41.689 IO depths : 1=5.5%, 2=11.8%, 4=24.9%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:41.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.689 filename0: (groupid=0, jobs=1): err= 0: pid=1004331: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=451, BW=1807KiB/s (1851kB/s)(17.7MiB/10022msec) 00:34:41.689 slat (nsec): min=8121, max=74783, avg=18855.49, stdev=12429.64 00:34:41.689 clat (usec): min=21374, max=47377, avg=35238.28, stdev=3752.44 00:34:41.689 lat (usec): min=21424, max=47410, avg=35257.13, stdev=3749.61 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 
1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:34:41.689 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:41.689 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.689 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.689 | 99.99th=[47449] 00:34:41.689 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1804.95, stdev=165.58, samples=20 00:34:41.689 iops : min= 352, max= 480, avg=451.20, stdev=41.40, samples=20 00:34:41.689 lat (msec) : 50=100.00% 00:34:41.689 cpu : usr=97.98%, sys=1.60%, ctx=13, majf=0, minf=30 00:34:41.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.689 filename0: (groupid=0, jobs=1): err= 0: pid=1004332: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.8MiB/10011msec) 00:34:41.689 slat (usec): min=10, max=119, avg=45.58, stdev=16.71 00:34:41.689 clat (usec): min=10612, max=44696, avg=34853.55, stdev=4043.83 00:34:41.689 lat (usec): min=10667, max=44718, avg=34899.12, stdev=4041.69 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 1.00th=[22676], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:34:41.689 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.689 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.689 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44827], 00:34:41.689 | 99.99th=[44827] 00:34:41.689 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1811.20, stdev=177.53, samples=20 00:34:41.689 iops : min= 352, max= 512, avg=452.80, stdev=44.38, samples=20 00:34:41.689 lat 
(msec) : 20=0.92%, 50=99.08% 00:34:41.689 cpu : usr=97.94%, sys=1.41%, ctx=85, majf=0, minf=40 00:34:41.689 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.689 filename0: (groupid=0, jobs=1): err= 0: pid=1004333: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10007msec) 00:34:41.689 slat (usec): min=10, max=122, avg=58.36, stdev=21.64 00:34:41.689 clat (usec): min=19185, max=44695, avg=34835.97, stdev=3643.67 00:34:41.689 lat (usec): min=19242, max=44737, avg=34894.33, stdev=3652.46 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 1.00th=[31065], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:34:41.689 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.689 | 70.00th=[33817], 80.00th=[34341], 90.00th=[42730], 95.00th=[43254], 00:34:41.689 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44303], 99.95th=[44827], 00:34:41.689 | 99.99th=[44827] 00:34:41.689 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1804.80, stdev=170.72, samples=20 00:34:41.689 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:34:41.689 lat (msec) : 20=0.31%, 50=99.69% 00:34:41.689 cpu : usr=98.28%, sys=1.29%, ctx=18, majf=0, minf=34 00:34:41.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.689 filename0: 
(groupid=0, jobs=1): err= 0: pid=1004334: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=451, BW=1804KiB/s (1847kB/s)(17.6MiB/10004msec) 00:34:41.689 slat (usec): min=5, max=145, avg=74.82, stdev=14.91 00:34:41.689 clat (usec): min=22016, max=54994, avg=34801.77, stdev=3849.90 00:34:41.689 lat (usec): min=22042, max=55009, avg=34876.60, stdev=3851.36 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:34:41.689 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:34:41.689 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43254], 00:34:41.689 | 99.00th=[43779], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:34:41.689 | 99.99th=[54789] 00:34:41.689 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=159.64, samples=19 00:34:41.689 iops : min= 352, max= 480, avg=448.00, stdev=39.91, samples=19 00:34:41.689 lat (msec) : 50=99.65%, 100=0.35% 00:34:41.689 cpu : usr=98.37%, sys=1.17%, ctx=14, majf=0, minf=26 00:34:41.689 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.689 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.689 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.689 filename0: (groupid=0, jobs=1): err= 0: pid=1004335: Wed Nov 6 09:10:53 2024 00:34:41.689 read: IOPS=451, BW=1804KiB/s (1847kB/s)(17.6MiB/10004msec) 00:34:41.689 slat (usec): min=8, max=119, avg=38.56, stdev=15.05 00:34:41.689 clat (usec): min=22025, max=54678, avg=35110.53, stdev=3865.11 00:34:41.689 lat (usec): min=22038, max=54719, avg=35149.10, stdev=3863.61 00:34:41.689 clat percentiles (usec): 00:34:41.689 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.689 | 30.00th=[33424], 40.00th=[33424], 
50.00th=[33817], 60.00th=[33817], 00:34:41.689 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.689 | 99.00th=[44303], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:34:41.689 | 99.99th=[54789] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1792.16, stdev=159.51, samples=19 00:34:41.690 iops : min= 352, max= 480, avg=448.00, stdev=39.91, samples=19 00:34:41.690 lat (msec) : 50=99.65%, 100=0.35% 00:34:41.690 cpu : usr=98.09%, sys=1.33%, ctx=88, majf=0, minf=29 00:34:41.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename0: (groupid=0, jobs=1): err= 0: pid=1004336: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10007msec) 00:34:41.690 slat (usec): min=8, max=116, avg=72.01, stdev=14.60 00:34:41.690 clat (usec): min=18330, max=60644, avg=34835.44, stdev=4020.47 00:34:41.690 lat (usec): min=18345, max=60673, avg=34907.45, stdev=4020.49 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:34:41.690 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:34:41.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43254], 00:34:41.690 | 99.00th=[43779], 99.50th=[44303], 99.90th=[60556], 99.95th=[60556], 00:34:41.690 | 99.99th=[60556] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1792.16, stdev=170.54, samples=19 00:34:41.690 iops : min= 352, max= 480, avg=448.00, stdev=42.67, samples=19 00:34:41.690 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:34:41.690 cpu : usr=98.20%, sys=1.34%, ctx=15, majf=0, minf=28 
00:34:41.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename0: (groupid=0, jobs=1): err= 0: pid=1004337: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10005msec) 00:34:41.690 slat (nsec): min=8194, max=94604, avg=31482.42, stdev=12945.18 00:34:41.690 clat (usec): min=18322, max=73408, avg=35217.22, stdev=4187.59 00:34:41.690 lat (usec): min=18337, max=73429, avg=35248.70, stdev=4186.97 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.690 | 99.00th=[44303], 99.50th=[47973], 99.90th=[58459], 99.95th=[58459], 00:34:41.690 | 99.99th=[73925] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1792.00, stdev=157.76, samples=19 00:34:41.690 iops : min= 352, max= 480, avg=448.00, stdev=39.44, samples=19 00:34:41.690 lat (msec) : 20=0.64%, 50=99.00%, 100=0.35% 00:34:41.690 cpu : usr=98.49%, sys=1.09%, ctx=16, majf=0, minf=45 00:34:41.690 IO depths : 1=4.2%, 2=10.4%, 4=24.9%, 8=52.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004338: Wed Nov 6 09:10:53 2024 00:34:41.690 
read: IOPS=451, BW=1808KiB/s (1851kB/s)(17.7MiB/10018msec) 00:34:41.690 slat (nsec): min=11256, max=86208, avg=39053.85, stdev=13216.96 00:34:41.690 clat (usec): min=22032, max=51462, avg=35033.54, stdev=3693.61 00:34:41.690 lat (usec): min=22049, max=51502, avg=35072.60, stdev=3693.80 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.690 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.690 | 99.99th=[51643] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1804.80, stdev=165.59, samples=20 00:34:41.690 iops : min= 352, max= 480, avg=451.20, stdev=41.40, samples=20 00:34:41.690 lat (msec) : 50=99.96%, 100=0.04% 00:34:41.690 cpu : usr=98.05%, sys=1.33%, ctx=65, majf=0, minf=27 00:34:41.690 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004339: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.8MiB/10026msec) 00:34:41.690 slat (usec): min=8, max=128, avg=38.78, stdev=18.75 00:34:41.690 clat (usec): min=15644, max=44641, avg=34987.95, stdev=3872.61 00:34:41.690 lat (usec): min=15707, max=44668, avg=35026.74, stdev=3870.78 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[30540], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 
80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.690 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44827], 00:34:41.690 | 99.99th=[44827] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.18%, avg=1811.20, stdev=156.90, samples=20 00:34:41.690 iops : min= 352, max= 480, avg=452.80, stdev=39.23, samples=20 00:34:41.690 lat (msec) : 20=0.64%, 50=99.36% 00:34:41.690 cpu : usr=98.23%, sys=1.32%, ctx=36, majf=0, minf=28 00:34:41.690 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004340: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=451, BW=1807KiB/s (1851kB/s)(17.7MiB/10022msec) 00:34:41.690 slat (nsec): min=8522, max=76001, avg=31609.88, stdev=11642.60 00:34:41.690 clat (usec): min=23728, max=51727, avg=35155.81, stdev=3772.95 00:34:41.690 lat (usec): min=23758, max=51753, avg=35187.42, stdev=3769.48 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.690 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.690 | 99.99th=[51643] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1804.95, stdev=165.58, samples=20 00:34:41.690 iops : min= 352, max= 480, avg=451.20, stdev=41.40, samples=20 00:34:41.690 lat (msec) : 50=99.96%, 100=0.04% 00:34:41.690 cpu : usr=97.99%, sys=1.38%, ctx=82, majf=0, minf=32 00:34:41.690 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004341: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=452, BW=1810KiB/s (1854kB/s)(17.7MiB/10006msec) 00:34:41.690 slat (nsec): min=10182, max=87228, avg=40145.87, stdev=12544.67 00:34:41.690 clat (usec): min=19377, max=44691, avg=34997.69, stdev=3732.75 00:34:41.690 lat (usec): min=19413, max=44716, avg=35037.84, stdev=3732.33 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[31065], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.690 | 99.00th=[43779], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.690 | 99.99th=[44827] 00:34:41.690 bw ( KiB/s): min= 1408, max= 1923, per=4.16%, avg=1804.95, stdev=170.83, samples=20 00:34:41.690 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:34:41.690 lat (msec) : 20=0.35%, 50=99.65% 00:34:41.690 cpu : usr=98.35%, sys=1.25%, ctx=17, majf=0, minf=27 00:34:41.690 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004342: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.8MiB/10011msec) 00:34:41.690 slat 
(nsec): min=10617, max=95465, avg=39224.78, stdev=12495.24 00:34:41.690 clat (usec): min=15164, max=44720, avg=34910.38, stdev=4053.00 00:34:41.690 lat (usec): min=15214, max=44751, avg=34949.61, stdev=4051.99 00:34:41.690 clat percentiles (usec): 00:34:41.690 | 1.00th=[19792], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.690 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.690 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.690 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.690 | 99.99th=[44827] 00:34:41.690 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1811.20, stdev=177.53, samples=20 00:34:41.690 iops : min= 352, max= 512, avg=452.80, stdev=44.38, samples=20 00:34:41.690 lat (msec) : 20=1.06%, 50=98.94% 00:34:41.690 cpu : usr=97.85%, sys=1.47%, ctx=53, majf=0, minf=25 00:34:41.690 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.690 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.690 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.690 filename1: (groupid=0, jobs=1): err= 0: pid=1004343: Wed Nov 6 09:10:53 2024 00:34:41.690 read: IOPS=451, BW=1805KiB/s (1848kB/s)(17.6MiB/10001msec) 00:34:41.690 slat (nsec): min=5234, max=89792, avg=36964.41, stdev=14370.77 00:34:41.690 clat (usec): min=22018, max=51751, avg=35110.65, stdev=3793.70 00:34:41.690 lat (usec): min=22028, max=51766, avg=35147.62, stdev=3793.38 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43254], 95.00th=[43779], 00:34:41.691 | 
99.00th=[44303], 99.50th=[44303], 99.90th=[51643], 99.95th=[51643], 00:34:41.691 | 99.99th=[51643] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=175.92, samples=19 00:34:41.691 iops : min= 352, max= 480, avg=448.00, stdev=43.98, samples=19 00:34:41.691 lat (msec) : 50=99.65%, 100=0.35% 00:34:41.691 cpu : usr=98.41%, sys=1.17%, ctx=17, majf=0, minf=32 00:34:41.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename1: (groupid=0, jobs=1): err= 0: pid=1004344: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10005msec) 00:34:41.691 slat (nsec): min=8289, max=73571, avg=30450.94, stdev=11124.91 00:34:41.691 clat (usec): min=13273, max=58220, avg=35198.68, stdev=4006.13 00:34:41.691 lat (usec): min=13283, max=58234, avg=35229.14, stdev=4006.01 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44303], 99.90th=[57934], 99.95th=[57934], 00:34:41.691 | 99.99th=[58459] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=159.02, samples=19 00:34:41.691 iops : min= 352, max= 480, avg=448.00, stdev=39.75, samples=19 00:34:41.691 lat (msec) : 20=0.40%, 50=99.25%, 100=0.35% 00:34:41.691 cpu : usr=96.97%, sys=1.89%, ctx=208, majf=0, minf=26 00:34:41.691 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename1: (groupid=0, jobs=1): err= 0: pid=1004345: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10006msec) 00:34:41.691 slat (nsec): min=6440, max=79235, avg=34059.11, stdev=9905.59 00:34:41.691 clat (usec): min=12656, max=73616, avg=35177.37, stdev=4305.78 00:34:41.691 lat (usec): min=12680, max=73642, avg=35211.43, stdev=4305.04 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[50594], 99.90th=[58459], 99.95th=[58459], 00:34:41.691 | 99.99th=[73925] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=170.67, samples=19 00:34:41.691 iops : min= 352, max= 480, avg=448.00, stdev=42.67, samples=19 00:34:41.691 lat (msec) : 20=0.35%, 50=99.14%, 100=0.51% 00:34:41.691 cpu : usr=98.08%, sys=1.52%, ctx=20, majf=0, minf=28 00:34:41.691 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename2: (groupid=0, jobs=1): err= 0: pid=1004346: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=453, BW=1816KiB/s (1859kB/s)(17.8MiB/10011msec) 00:34:41.691 slat (nsec): min=9451, max=85625, avg=39359.17, 
stdev=11864.73 00:34:41.691 clat (usec): min=15213, max=44642, avg=34910.08, stdev=4024.70 00:34:41.691 lat (usec): min=15287, max=44667, avg=34949.44, stdev=4024.27 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[19792], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:34:41.691 | 99.99th=[44827] 00:34:41.691 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1811.20, stdev=177.53, samples=20 00:34:41.691 iops : min= 352, max= 512, avg=452.80, stdev=44.38, samples=20 00:34:41.691 lat (msec) : 20=1.06%, 50=98.94% 00:34:41.691 cpu : usr=98.42%, sys=1.15%, ctx=14, majf=0, minf=43 00:34:41.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename2: (groupid=0, jobs=1): err= 0: pid=1004347: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10006msec) 00:34:41.691 slat (usec): min=8, max=117, avg=42.70, stdev=18.34 00:34:41.691 clat (usec): min=12552, max=84237, avg=35093.20, stdev=4600.84 00:34:41.691 lat (usec): min=12560, max=84255, avg=35135.90, stdev=4597.24 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[31851], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44827], 99.90th=[73925], 99.95th=[73925], 
00:34:41.691 | 99.99th=[84411] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=170.67, samples=19 00:34:41.691 iops : min= 352, max= 480, avg=448.00, stdev=42.67, samples=19 00:34:41.691 lat (msec) : 20=0.35%, 50=99.29%, 100=0.35% 00:34:41.691 cpu : usr=96.21%, sys=2.36%, ctx=371, majf=0, minf=37 00:34:41.691 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename2: (groupid=0, jobs=1): err= 0: pid=1004348: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=456, BW=1825KiB/s (1869kB/s)(17.9MiB/10029msec) 00:34:41.691 slat (nsec): min=7199, max=88654, avg=32553.69, stdev=14623.41 00:34:41.691 clat (usec): min=6009, max=44676, avg=34808.37, stdev=4593.04 00:34:41.691 lat (usec): min=6018, max=44701, avg=34840.92, stdev=4590.50 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[15664], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.691 | 99.99th=[44827] 00:34:41.691 bw ( KiB/s): min= 1408, max= 2176, per=4.21%, avg=1824.00, stdev=175.58, samples=20 00:34:41.691 iops : min= 352, max= 544, avg=456.00, stdev=43.89, samples=20 00:34:41.691 lat (msec) : 10=0.35%, 20=1.38%, 50=98.27% 00:34:41.691 cpu : usr=96.72%, sys=2.14%, ctx=170, majf=0, minf=60 00:34:41.691 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 
complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename2: (groupid=0, jobs=1): err= 0: pid=1004349: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=452, BW=1808KiB/s (1852kB/s)(17.7MiB/10015msec) 00:34:41.691 slat (usec): min=8, max=124, avg=40.42, stdev=23.16 00:34:41.691 clat (usec): min=19035, max=48422, avg=35041.07, stdev=3784.34 00:34:41.691 lat (usec): min=19046, max=48444, avg=35081.49, stdev=3777.53 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[32113], 5.00th=[32637], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[46400], 00:34:41.691 | 99.99th=[48497] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.16%, avg=1804.95, stdev=170.71, samples=20 00:34:41.691 iops : min= 352, max= 480, avg=451.20, stdev=42.68, samples=20 00:34:41.691 lat (msec) : 20=0.04%, 50=99.96% 00:34:41.691 cpu : usr=96.30%, sys=2.30%, ctx=215, majf=0, minf=42 00:34:41.691 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.691 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.691 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.691 filename2: (groupid=0, jobs=1): err= 0: pid=1004350: Wed Nov 6 09:10:53 2024 00:34:41.691 read: IOPS=450, BW=1803KiB/s (1846kB/s)(17.6MiB/10005msec) 00:34:41.691 slat (usec): min=8, max=110, avg=40.61, stdev=18.16 00:34:41.691 clat (usec): min=12704, max=74122, avg=35133.24, stdev=4525.81 
00:34:41.691 lat (usec): min=12739, max=74165, avg=35173.85, stdev=4523.02 00:34:41.691 clat percentiles (usec): 00:34:41.691 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:41.691 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.691 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43779], 95.00th=[43779], 00:34:41.691 | 99.00th=[44303], 99.50th=[44827], 99.90th=[73925], 99.95th=[73925], 00:34:41.691 | 99.99th=[73925] 00:34:41.691 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1791.16, stdev=170.04, samples=19 00:34:41.691 iops : min= 352, max= 480, avg=447.79, stdev=42.51, samples=19 00:34:41.691 lat (msec) : 20=0.31%, 50=99.33%, 100=0.35% 00:34:41.691 cpu : usr=97.68%, sys=1.48%, ctx=144, majf=0, minf=26 00:34:41.691 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:34:41.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 issued rwts: total=4510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.692 filename2: (groupid=0, jobs=1): err= 0: pid=1004351: Wed Nov 6 09:10:53 2024 00:34:41.692 read: IOPS=451, BW=1807KiB/s (1851kB/s)(17.7MiB/10022msec) 00:34:41.692 slat (nsec): min=8482, max=84664, avg=27649.98, stdev=12240.55 00:34:41.692 clat (usec): min=23579, max=45001, avg=35188.60, stdev=3758.16 00:34:41.692 lat (usec): min=23596, max=45028, avg=35216.25, stdev=3753.61 00:34:41.692 clat percentiles (usec): 00:34:41.692 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:41.692 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.692 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.692 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44827], 99.95th=[44827], 00:34:41.692 | 99.99th=[44827] 00:34:41.692 bw ( KiB/s): min= 1408, 
max= 1920, per=4.16%, avg=1804.95, stdev=165.58, samples=20 00:34:41.692 iops : min= 352, max= 480, avg=451.20, stdev=41.40, samples=20 00:34:41.692 lat (msec) : 50=100.00% 00:34:41.692 cpu : usr=97.72%, sys=1.49%, ctx=108, majf=0, minf=31 00:34:41.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.692 filename2: (groupid=0, jobs=1): err= 0: pid=1004352: Wed Nov 6 09:10:53 2024 00:34:41.692 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10006msec) 00:34:41.692 slat (nsec): min=8219, max=63649, avg=30638.11, stdev=10355.71 00:34:41.692 clat (usec): min=18260, max=59669, avg=35206.14, stdev=4065.53 00:34:41.692 lat (usec): min=18301, max=59695, avg=35236.77, stdev=4065.37 00:34:41.692 clat percentiles (usec): 00:34:41.692 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:41.692 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:34:41.692 | 70.00th=[33817], 80.00th=[34866], 90.00th=[43779], 95.00th=[43779], 00:34:41.692 | 99.00th=[44303], 99.50th=[44303], 99.90th=[59507], 99.95th=[59507], 00:34:41.692 | 99.99th=[59507] 00:34:41.692 bw ( KiB/s): min= 1408, max= 1920, per=4.14%, avg=1792.16, stdev=170.54, samples=19 00:34:41.692 iops : min= 352, max= 480, avg=448.00, stdev=42.67, samples=19 00:34:41.692 lat (msec) : 20=0.49%, 50=99.16%, 100=0.35% 00:34:41.692 cpu : usr=96.97%, sys=1.94%, ctx=127, majf=0, minf=33 00:34:41.692 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:41.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:41.692 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.692 filename2: (groupid=0, jobs=1): err= 0: pid=1004353: Wed Nov 6 09:10:53 2024 00:34:41.692 read: IOPS=450, BW=1804KiB/s (1847kB/s)(17.6MiB/10006msec) 00:34:41.692 slat (usec): min=11, max=115, avg=52.67, stdev=22.99 00:34:41.692 clat (usec): min=22156, max=52627, avg=35020.47, stdev=3723.19 00:34:41.692 lat (usec): min=22244, max=52672, avg=35073.14, stdev=3730.66 00:34:41.692 clat percentiles (usec): 00:34:41.692 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:34:41.692 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:34:41.692 | 70.00th=[33817], 80.00th=[34341], 90.00th=[43254], 95.00th=[43254], 00:34:41.692 | 99.00th=[44303], 99.50th=[44827], 99.90th=[52691], 99.95th=[52691], 00:34:41.692 | 99.99th=[52691] 00:34:41.692 bw ( KiB/s): min= 1408, max= 1920, per=4.13%, avg=1792.00, stdev=175.92, samples=19 00:34:41.692 iops : min= 352, max= 480, avg=448.00, stdev=43.98, samples=19 00:34:41.692 lat (msec) : 50=99.65%, 100=0.35% 00:34:41.692 cpu : usr=98.21%, sys=1.36%, ctx=15, majf=0, minf=37 00:34:41.692 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:41.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.692 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.692 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:41.692 00:34:41.692 Run status group 0 (all jobs): 00:34:41.692 READ: bw=42.3MiB/s (44.4MB/s), 1803KiB/s-1825KiB/s (1846kB/s-1869kB/s), io=424MiB (445MB), run=10001-10029msec 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:41.692 
09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 bdev_null0 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 [2024-11-06 09:10:54.015971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.692 bdev_null1 00:34:41.692 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.693 09:10:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.693 { 00:34:41.693 "params": { 00:34:41.693 "name": "Nvme$subsystem", 00:34:41.693 "trtype": "$TEST_TRANSPORT", 00:34:41.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.693 "adrfam": "ipv4", 00:34:41.693 "trsvcid": "$NVMF_PORT", 00:34:41.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.693 "hdgst": ${hdgst:-false}, 00:34:41.693 "ddgst": ${ddgst:-false} 00:34:41.693 }, 00:34:41.693 "method": "bdev_nvme_attach_controller" 00:34:41.693 } 00:34:41.693 EOF 00:34:41.693 )") 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:41.693 { 00:34:41.693 "params": { 00:34:41.693 "name": "Nvme$subsystem", 00:34:41.693 "trtype": "$TEST_TRANSPORT", 00:34:41.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.693 "adrfam": "ipv4", 00:34:41.693 "trsvcid": "$NVMF_PORT", 00:34:41.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.693 "hdgst": ${hdgst:-false}, 00:34:41.693 "ddgst": ${ddgst:-false} 00:34:41.693 }, 00:34:41.693 "method": "bdev_nvme_attach_controller" 00:34:41.693 } 00:34:41.693 EOF 00:34:41.693 )") 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:41.693 "params": { 00:34:41.693 "name": "Nvme0", 00:34:41.693 "trtype": "tcp", 00:34:41.693 "traddr": "10.0.0.2", 00:34:41.693 "adrfam": "ipv4", 00:34:41.693 "trsvcid": "4420", 00:34:41.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.693 "hdgst": false, 00:34:41.693 "ddgst": false 00:34:41.693 }, 00:34:41.693 "method": "bdev_nvme_attach_controller" 00:34:41.693 },{ 00:34:41.693 "params": { 00:34:41.693 "name": "Nvme1", 00:34:41.693 "trtype": "tcp", 00:34:41.693 "traddr": "10.0.0.2", 00:34:41.693 "adrfam": "ipv4", 00:34:41.693 "trsvcid": "4420", 00:34:41.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.693 "hdgst": false, 00:34:41.693 "ddgst": false 00:34:41.693 }, 00:34:41.693 "method": "bdev_nvme_attach_controller" 00:34:41.693 }' 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:41.693 09:10:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.693 09:10:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.693 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:41.693 ... 00:34:41.693 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:41.693 ... 00:34:41.693 fio-3.35 00:34:41.693 Starting 4 threads 00:34:48.250 00:34:48.250 filename0: (groupid=0, jobs=1): err= 0: pid=1005731: Wed Nov 6 09:11:00 2024 00:34:48.250 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5002msec) 00:34:48.250 slat (nsec): min=5773, max=64498, avg=16721.47, stdev=8651.87 00:34:48.250 clat (usec): min=892, max=7775, avg=4195.52, stdev=508.53 00:34:48.250 lat (usec): min=911, max=7788, avg=4212.24, stdev=508.95 00:34:48.250 clat percentiles (usec): 00:34:48.250 | 1.00th=[ 2638], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3949], 00:34:48.250 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:34:48.250 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:34:48.250 | 99.00th=[ 5932], 99.50th=[ 6456], 99.90th=[ 7373], 99.95th=[ 7570], 00:34:48.250 | 99.99th=[ 7767] 00:34:48.250 bw ( KiB/s): min=14669, max=15696, per=25.53%, avg=15044.50, stdev=375.49, samples=10 00:34:48.250 iops : min= 1833, max= 1962, avg=1880.50, stdev=47.01, samples=10 00:34:48.250 lat (usec) : 1000=0.02% 00:34:48.250 lat (msec) : 2=0.36%, 4=21.75%, 10=77.87% 00:34:48.250 cpu : usr=94.60%, sys=4.82%, ctx=34, majf=0, minf=9 00:34:48.250 IO depths : 1=0.5%, 2=13.3%, 4=59.0%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 complete : 
0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 issued rwts: total=9409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.250 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.250 filename0: (groupid=0, jobs=1): err= 0: pid=1005732: Wed Nov 6 09:11:00 2024 00:34:48.250 read: IOPS=1823, BW=14.2MiB/s (14.9MB/s)(71.3MiB/5003msec) 00:34:48.250 slat (nsec): min=5485, max=73349, avg=17016.92, stdev=9457.33 00:34:48.250 clat (usec): min=813, max=7839, avg=4324.23, stdev=660.76 00:34:48.250 lat (usec): min=826, max=7853, avg=4341.24, stdev=660.68 00:34:48.250 clat percentiles (usec): 00:34:48.250 | 1.00th=[ 2147], 5.00th=[ 3523], 10.00th=[ 3818], 20.00th=[ 4113], 00:34:48.250 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:34:48.250 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4752], 95.00th=[ 5407], 00:34:48.250 | 99.00th=[ 7111], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 7767], 00:34:48.250 | 99.99th=[ 7832] 00:34:48.250 bw ( KiB/s): min=14288, max=14832, per=24.76%, avg=14587.20, stdev=164.91, samples=10 00:34:48.250 iops : min= 1786, max= 1854, avg=1823.40, stdev=20.61, samples=10 00:34:48.250 lat (usec) : 1000=0.07% 00:34:48.250 lat (msec) : 2=0.79%, 4=13.52%, 10=85.62% 00:34:48.250 cpu : usr=95.18%, sys=4.32%, ctx=8, majf=0, minf=9 00:34:48.250 IO depths : 1=0.4%, 2=14.2%, 4=58.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 issued rwts: total=9125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.250 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.250 filename1: (groupid=0, jobs=1): err= 0: pid=1005733: Wed Nov 6 09:11:00 2024 00:34:48.250 read: IOPS=1815, BW=14.2MiB/s (14.9MB/s)(70.9MiB/5001msec) 00:34:48.250 slat (nsec): min=5873, max=71093, avg=17138.27, stdev=9529.89 00:34:48.250 clat (usec): min=816, 
max=8019, avg=4343.44, stdev=690.85 00:34:48.250 lat (usec): min=829, max=8041, avg=4360.57, stdev=690.61 00:34:48.250 clat percentiles (usec): 00:34:48.250 | 1.00th=[ 1876], 5.00th=[ 3523], 10.00th=[ 3851], 20.00th=[ 4113], 00:34:48.250 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:34:48.250 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4883], 95.00th=[ 5538], 00:34:48.250 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 7832], 99.95th=[ 7963], 00:34:48.250 | 99.99th=[ 8029] 00:34:48.250 bw ( KiB/s): min=14304, max=14688, per=24.64%, avg=14518.00, stdev=128.07, samples=10 00:34:48.250 iops : min= 1788, max= 1836, avg=1814.70, stdev=16.07, samples=10 00:34:48.250 lat (usec) : 1000=0.11% 00:34:48.250 lat (msec) : 2=0.94%, 4=12.40%, 10=86.55% 00:34:48.250 cpu : usr=94.86%, sys=4.62%, ctx=9, majf=0, minf=0 00:34:48.250 IO depths : 1=0.2%, 2=14.9%, 4=57.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 issued rwts: total=9080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.250 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.250 filename1: (groupid=0, jobs=1): err= 0: pid=1005734: Wed Nov 6 09:11:00 2024 00:34:48.250 read: IOPS=1846, BW=14.4MiB/s (15.1MB/s)(72.1MiB/5002msec) 00:34:48.250 slat (nsec): min=5362, max=73336, avg=17676.75, stdev=9629.27 00:34:48.250 clat (usec): min=841, max=7876, avg=4268.39, stdev=559.13 00:34:48.250 lat (usec): min=848, max=7896, avg=4286.06, stdev=559.36 00:34:48.250 clat percentiles (usec): 00:34:48.250 | 1.00th=[ 2376], 5.00th=[ 3490], 10.00th=[ 3785], 20.00th=[ 4080], 00:34:48.250 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4293], 00:34:48.250 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5080], 00:34:48.250 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7504], 99.95th=[ 7701], 
00:34:48.250 | 99.99th=[ 7898] 00:34:48.250 bw ( KiB/s): min=14496, max=15184, per=25.05%, avg=14761.60, stdev=197.00, samples=10 00:34:48.250 iops : min= 1812, max= 1898, avg=1845.20, stdev=24.63, samples=10 00:34:48.250 lat (usec) : 1000=0.06% 00:34:48.250 lat (msec) : 2=0.57%, 4=16.32%, 10=83.04% 00:34:48.250 cpu : usr=94.96%, sys=4.50%, ctx=8, majf=0, minf=0 00:34:48.250 IO depths : 1=0.5%, 2=16.6%, 4=56.2%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.250 issued rwts: total=9234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.250 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.250 00:34:48.250 Run status group 0 (all jobs): 00:34:48.250 READ: bw=57.5MiB/s (60.3MB/s), 14.2MiB/s-14.7MiB/s (14.9MB/s-15.4MB/s), io=288MiB (302MB), run=5001-5003msec 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.250 09:11:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.250 00:34:48.250 real 0m24.797s 00:34:48.250 user 4m33.024s 00:34:48.250 sys 0m6.588s 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 ************************************ 00:34:48.250 END TEST fio_dif_rand_params 00:34:48.250 ************************************ 00:34:48.250 09:11:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:48.250 09:11:00 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:48.250 09:11:00 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.250 09:11:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.250 ************************************ 00:34:48.250 START TEST fio_dif_digest 00:34:48.250 ************************************ 00:34:48.250 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:48.250 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:48.250 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:48.250 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:48.250 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.251 bdev_null0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.251 [2024-11-06 09:11:00.758111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 
00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:48.251 { 00:34:48.251 "params": { 00:34:48.251 "name": "Nvme$subsystem", 00:34:48.251 "trtype": "$TEST_TRANSPORT", 00:34:48.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.251 "adrfam": "ipv4", 00:34:48.251 "trsvcid": "$NVMF_PORT", 00:34:48.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:34:48.251 "hdgst": ${hdgst:-false}, 00:34:48.251 "ddgst": ${ddgst:-false} 00:34:48.251 }, 00:34:48.251 "method": "bdev_nvme_attach_controller" 00:34:48.251 } 00:34:48.251 EOF 00:34:48.251 )") 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:48.251 "params": { 00:34:48.251 "name": "Nvme0", 00:34:48.251 "trtype": "tcp", 00:34:48.251 "traddr": "10.0.0.2", 00:34:48.251 "adrfam": "ipv4", 00:34:48.251 "trsvcid": "4420", 00:34:48.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.251 "hdgst": true, 00:34:48.251 "ddgst": true 00:34:48.251 }, 00:34:48.251 "method": "bdev_nvme_attach_controller" 00:34:48.251 }' 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.251 09:11:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.251 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:48.251 ... 
00:34:48.251 fio-3.35 00:34:48.251 Starting 3 threads 00:35:00.446 00:35:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=1006607: Wed Nov 6 09:11:11 2024 00:35:00.446 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10046msec) 00:35:00.446 slat (nsec): min=7663, max=37453, avg=13934.66, stdev=3161.29 00:35:00.446 clat (usec): min=8810, max=54602, avg=14822.27, stdev=1942.66 00:35:00.446 lat (usec): min=8823, max=54614, avg=14836.20, stdev=1942.58 00:35:00.446 clat percentiles (usec): 00:35:00.446 | 1.00th=[10028], 5.00th=[11076], 10.00th=[13042], 20.00th=[13960], 00:35:00.446 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15139], 00:35:00.446 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:35:00.446 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19268], 99.95th=[50070], 00:35:00.446 | 99.99th=[54789] 00:35:00.446 bw ( KiB/s): min=24064, max=27648, per=34.10%, avg=25922.70, stdev=935.63, samples=20 00:35:00.446 iops : min= 188, max= 216, avg=202.50, stdev= 7.28, samples=20 00:35:00.446 lat (msec) : 10=1.04%, 20=98.87%, 50=0.05%, 100=0.05% 00:35:00.446 cpu : usr=91.92%, sys=7.55%, ctx=24, majf=0, minf=90 00:35:00.446 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=1006608: Wed Nov 6 09:11:11 2024 00:35:00.446 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(228MiB/10045msec) 00:35:00.446 slat (nsec): min=7493, max=38735, avg=13787.37, stdev=3072.03 00:35:00.446 clat (usec): min=11480, max=59159, avg=16491.11, stdev=6064.72 00:35:00.446 lat (usec): min=11493, max=59177, avg=16504.90, stdev=6064.75 00:35:00.446 clat percentiles (usec): 00:35:00.446 
| 1.00th=[13435], 5.00th=[13960], 10.00th=[14353], 20.00th=[14746], 00:35:00.446 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15795], 00:35:00.446 | 70.00th=[16188], 80.00th=[16450], 90.00th=[16909], 95.00th=[17695], 00:35:00.446 | 99.00th=[56361], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:35:00.446 | 99.99th=[58983] 00:35:00.446 bw ( KiB/s): min=20224, max=25344, per=30.64%, avg=23296.00, stdev=1606.25, samples=20 00:35:00.446 iops : min= 158, max= 198, avg=182.00, stdev=12.55, samples=20 00:35:00.446 lat (msec) : 20=97.75%, 50=0.05%, 100=2.19% 00:35:00.446 cpu : usr=92.70%, sys=6.79%, ctx=17, majf=0, minf=111 00:35:00.446 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 issued rwts: total=1823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.446 filename0: (groupid=0, jobs=1): err= 0: pid=1006609: Wed Nov 6 09:11:11 2024 00:35:00.446 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(265MiB/10048msec) 00:35:00.446 slat (nsec): min=7061, max=61449, avg=13568.71, stdev=3328.97 00:35:00.446 clat (usec): min=8296, max=52820, avg=14201.62, stdev=1900.06 00:35:00.446 lat (usec): min=8310, max=52849, avg=14215.19, stdev=1900.23 00:35:00.446 clat percentiles (usec): 00:35:00.446 | 1.00th=[ 9503], 5.00th=[10552], 10.00th=[12256], 20.00th=[13304], 00:35:00.446 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[14615], 00:35:00.446 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16319], 00:35:00.446 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18744], 99.95th=[49021], 00:35:00.446 | 99.99th=[52691] 00:35:00.446 bw ( KiB/s): min=25856, max=28416, per=35.59%, avg=27059.20, stdev=818.44, samples=20 00:35:00.446 iops : min= 202, max= 222, avg=211.40, stdev= 6.39, 
samples=20 00:35:00.446 lat (msec) : 10=2.46%, 20=97.45%, 50=0.05%, 100=0.05% 00:35:00.446 cpu : usr=92.25%, sys=7.24%, ctx=14, majf=0, minf=192 00:35:00.446 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:00.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:00.446 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:00.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:00.446 00:35:00.446 Run status group 0 (all jobs): 00:35:00.446 READ: bw=74.2MiB/s (77.8MB/s), 22.7MiB/s-26.3MiB/s (23.8MB/s-27.6MB/s), io=746MiB (782MB), run=10045-10048msec 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:00.446 00:35:00.446 
real 0m11.259s 00:35:00.446 user 0m29.032s 00:35:00.446 sys 0m2.444s 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:00.446 09:11:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:00.446 ************************************ 00:35:00.446 END TEST fio_dif_digest 00:35:00.446 ************************************ 00:35:00.446 09:11:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:00.446 09:11:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:00.446 rmmod nvme_tcp 00:35:00.446 rmmod nvme_fabrics 00:35:00.446 rmmod nvme_keyring 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 1000440 ']' 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 1000440 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1000440 ']' 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1000440 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1000440 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:00.446 
09:11:12 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1000440' 00:35:00.446 killing process with pid 1000440 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1000440 00:35:00.446 09:11:12 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1000440 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:00.446 09:11:12 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:00.446 Waiting for block devices as requested 00:35:00.446 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:00.446 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:00.446 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:00.704 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:00.704 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:00.704 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:00.704 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:00.963 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:00.963 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:01.221 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:01.221 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:01.221 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:01.221 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:01.479 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:01.479 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:01.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:01.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@789 
-- # grep -v SPDK_NVMF 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:01.739 09:11:14 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:01.739 09:11:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:01.739 09:11:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.680 09:11:16 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:03.680 00:35:03.680 real 1m7.807s 00:35:03.680 user 6m31.103s 00:35:03.680 sys 0m17.860s 00:35:03.680 09:11:16 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:03.680 09:11:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:03.680 ************************************ 00:35:03.680 END TEST nvmf_dif 00:35:03.680 ************************************ 00:35:03.680 09:11:16 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:03.680 09:11:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:03.680 09:11:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:03.680 09:11:16 -- common/autotest_common.sh@10 -- # set +x 00:35:03.938 ************************************ 00:35:03.938 START TEST nvmf_abort_qd_sizes 00:35:03.938 ************************************ 00:35:03.938 09:11:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:03.938 * Looking for test storage... 
00:35:03.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lcov --version 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:03.938 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.939 --rc genhtml_branch_coverage=1 00:35:03.939 --rc genhtml_function_coverage=1 00:35:03.939 --rc genhtml_legend=1 00:35:03.939 --rc geninfo_all_blocks=1 00:35:03.939 --rc geninfo_unexecuted_blocks=1 00:35:03.939 00:35:03.939 ' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.939 --rc genhtml_branch_coverage=1 00:35:03.939 --rc genhtml_function_coverage=1 00:35:03.939 --rc genhtml_legend=1 00:35:03.939 --rc 
geninfo_all_blocks=1 00:35:03.939 --rc geninfo_unexecuted_blocks=1 00:35:03.939 00:35:03.939 ' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.939 --rc genhtml_branch_coverage=1 00:35:03.939 --rc genhtml_function_coverage=1 00:35:03.939 --rc genhtml_legend=1 00:35:03.939 --rc geninfo_all_blocks=1 00:35:03.939 --rc geninfo_unexecuted_blocks=1 00:35:03.939 00:35:03.939 ' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:03.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:03.939 --rc genhtml_branch_coverage=1 00:35:03.939 --rc genhtml_function_coverage=1 00:35:03.939 --rc genhtml_legend=1 00:35:03.939 --rc geninfo_all_blocks=1 00:35:03.939 --rc geninfo_unexecuted_blocks=1 00:35:03.939 00:35:03.939 ' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:03.939 09:11:17 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:03.939 09:11:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:03.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:03.939 09:11:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:06.470 09:11:19 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:06.470 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:06.470 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:06.470 Found net devices under 0000:09:00.0: cvl_0_0 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:06.470 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up 
== up ]] 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:06.471 Found net devices under 0000:09:00.1: cvl_0_1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:06.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:06.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:35:06.471 00:35:06.471 --- 10.0.0.2 ping statistics --- 00:35:06.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.471 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:06.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:06.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:35:06.471 00:35:06.471 --- 10.0.0.1 ping statistics --- 00:35:06.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:06.471 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:06.471 09:11:19 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:07.499 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:07.499 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:07.499 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:08.439 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:08.439 09:11:21 
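[Editor's note] The `nvmf_tcp_init` trace above (lines `nvmf/common.sh@250`–`@291`) moves one ice port into a network namespace so the target and initiator can talk over real NICs on one host. A minimal sketch of that plumbing, using the interface names and 10.0.0.0/24 addresses from this log; `DRYRUN=1` (the default here) only prints the commands, since the real ones need root and the `cvl_0_*` ports:

```shell
#!/usr/bin/env sh
# Sketch of the namespace setup nvmf_tcp_init performs in the log above.
# With DRYRUN=1 (default) each command is printed instead of executed.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "$*"; else "$@"; fi; }

TARGET_IF=cvl_0_0      # moves into the target namespace
INIT_IF=cvl_0_1        # stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

The cross-namespace pings in the log (10.0.0.1 ↔ 10.0.0.2) are the sanity check that this wiring worked before `nvmf_tgt` is launched inside the namespace.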
nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=1011490 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 1011490 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1011490 ']' 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:08.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:08.439 09:11:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.698 [2024-11-06 09:11:21.758161] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:35:08.698 [2024-11-06 09:11:21.758253] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:08.698 [2024-11-06 09:11:21.828275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:08.698 [2024-11-06 09:11:21.888049] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:08.698 [2024-11-06 09:11:21.888102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:08.698 [2024-11-06 09:11:21.888132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:08.698 [2024-11-06 09:11:21.888145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:08.698 [2024-11-06 09:11:21.888155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:08.698 [2024-11-06 09:11:21.889609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:08.698 [2024-11-06 09:11:21.889675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:08.698 [2024-11-06 09:11:21.889741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:08.698 [2024-11-06 09:11:21.889744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:08.956 09:11:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:08.956 ************************************ 00:35:08.956 START TEST spdk_target_abort 00:35:08.956 ************************************ 00:35:08.956 09:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:08.957 09:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:08.957 09:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:08.957 09:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.957 09:11:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.233 spdk_targetn1 00:35:12.233 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.234 [2024-11-06 09:11:24.917979] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:12.234 [2024-11-06 09:11:24.958294] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.234 09:11:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.510 Initializing NVMe Controllers 00:35:15.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.510 Initialization complete. Launching workers. 
00:35:15.510 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12184, failed: 0 00:35:15.510 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1269, failed to submit 10915 00:35:15.510 success 781, unsuccessful 488, failed 0 00:35:15.510 09:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.510 09:11:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:18.791 Initializing NVMe Controllers 00:35:18.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:18.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:18.791 Initialization complete. Launching workers. 00:35:18.791 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8897, failed: 0 00:35:18.791 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1245, failed to submit 7652 00:35:18.791 success 315, unsuccessful 930, failed 0 00:35:18.791 09:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:18.791 09:11:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:22.071 Initializing NVMe Controllers 00:35:22.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:22.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:22.071 Initialization complete. Launching workers. 
00:35:22.071 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30996, failed: 0 00:35:22.071 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2611, failed to submit 28385 00:35:22.071 success 498, unsuccessful 2113, failed 0 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:22.071 09:11:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1011490 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1011490 ']' 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1011490 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1011490 00:35:23.005 09:11:36 
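[Editor's note] The three abort runs above are the same example binary invoked at queue depths 4, 24, and 64 against the one TCP target (the `qds=(4 24 64)` array from `abort_qd_sizes.sh@26`). A sketch of that sweep; `ABORT_BIN` is the SPDK example path from this log and will differ per tree, and when the binary is not present the loop just prints what it would run:

```shell
#!/usr/bin/env sh
# Sketch of the queue-depth sweep abort_qd_sizes.sh drives in the log above.
ABORT_BIN=${ABORT_BIN:-./build/examples/abort}
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

run_sweep() {
    for qd in 4 24 64; do
        if [ -x "$ABORT_BIN" ]; then
            # Mixed read/write (-w rw, -M 50), 4 KiB I/O, abort traffic at this depth.
            "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
        else
            echo "would run: abort -q $qd -w rw -M 50 -o 4096 -r $TRID"
        fi
    done
}

run_sweep
```

Each run's summary line in the log ("I/O completed: N, failed: 0 ... abort submitted A, failed to submit B") is what the test grades: aborts may succeed or race with completion, but nothing may fail outright.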
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1011490' 00:35:23.005 killing process with pid 1011490 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1011490 00:35:23.005 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1011490 00:35:23.263 00:35:23.263 real 0m14.273s 00:35:23.263 user 0m53.986s 00:35:23.263 sys 0m2.729s 00:35:23.263 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:23.263 09:11:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:23.263 ************************************ 00:35:23.263 END TEST spdk_target_abort 00:35:23.263 ************************************ 00:35:23.263 09:11:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:23.263 09:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:23.263 09:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:23.263 09:11:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:23.263 ************************************ 00:35:23.264 START TEST kernel_target_abort 00:35:23.264 ************************************ 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:35:23.264 09:11:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@665 -- # local block nvme 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:23.264 09:11:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:24.638 Waiting for block devices as requested 00:35:24.638 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:24.638 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:24.638 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:24.638 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:24.897 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:24.897 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:24.897 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:25.156 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:25.156 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:25.156 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:25.414 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:25.414 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:25.414 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:25.414 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:25.672 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:25.672 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:25.672 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:35:25.931 09:11:38 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1646 -- # local device=nvme0n1 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1649 -- # [[ none != none ]] 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:25.931 09:11:38 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:25.931 No valid GPT data, bailing 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@693 -- # echo 1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:25.931 00:35:25.931 Discovery Log Number of Records 2, Generation counter 2 00:35:25.931 =====Discovery Log Entry 0====== 00:35:25.931 trtype: tcp 00:35:25.931 adrfam: ipv4 00:35:25.931 subtype: current discovery subsystem 00:35:25.931 treq: not specified, sq flow control disable supported 00:35:25.931 portid: 1 00:35:25.931 trsvcid: 4420 00:35:25.931 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:25.931 traddr: 10.0.0.1 00:35:25.931 eflags: none 00:35:25.931 sectype: none 00:35:25.931 =====Discovery Log Entry 1====== 00:35:25.931 trtype: tcp 00:35:25.931 adrfam: ipv4 00:35:25.931 subtype: nvme subsystem 00:35:25.931 treq: not specified, sq flow control disable supported 00:35:25.931 portid: 1 00:35:25.931 trsvcid: 4420 00:35:25.931 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:25.931 traddr: 10.0.0.1 00:35:25.931 eflags: none 00:35:25.931 sectype: none 00:35:25.931 09:11:39 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.931 09:11:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.210 Initializing NVMe Controllers 00:35:29.210 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.210 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.210 Initialization complete. Launching workers. 
00:35:29.210 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 47928, failed: 0 00:35:29.210 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 47928, failed to submit 0 00:35:29.210 success 0, unsuccessful 47928, failed 0 00:35:29.210 09:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:29.210 09:11:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.489 Initializing NVMe Controllers 00:35:32.489 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.489 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.489 Initialization complete. Launching workers. 00:35:32.489 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96325, failed: 0 00:35:32.489 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21274, failed to submit 75051 00:35:32.489 success 0, unsuccessful 21274, failed 0 00:35:32.489 09:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.489 09:11:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:35.770 Initializing NVMe Controllers 00:35:35.770 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:35.770 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:35.770 Initialization complete. Launching workers. 
00:35:35.770 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87352, failed: 0 00:35:35.770 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21798, failed to submit 65554 00:35:35.770 success 0, unsuccessful 21798, failed 0 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:35:35.770 09:11:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:36.706 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:36.706 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:36.706 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:36.706 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:37.644 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:37.902 00:35:37.902 real 0m14.560s 00:35:37.902 user 0m6.221s 00:35:37.902 sys 0m3.558s 00:35:37.902 09:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:37.902 09:11:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.902 ************************************ 00:35:37.902 END TEST kernel_target_abort 00:35:37.902 ************************************ 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:37.902 09:11:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:37.902 rmmod nvme_tcp 00:35:37.902 rmmod nvme_fabrics 00:35:37.902 rmmod nvme_keyring 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 1011490 ']' 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 1011490 00:35:37.902 09:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1011490 ']' 00:35:37.903 09:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1011490 00:35:37.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1011490) - No such process 00:35:37.903 09:11:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1011490 is not found' 00:35:37.903 Process with pid 1011490 is not found 00:35:37.903 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:37.903 09:11:51 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:39.278 Waiting for block devices as requested 00:35:39.278 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:39.278 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:39.278 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:39.278 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:39.278 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:39.537 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:39.537 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.537 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.537 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.795 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:39.795 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:39.795 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:40.054 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:40.054 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:40.054 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:40.054 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:40.312 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:40.312 09:11:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:40.313 09:11:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:42.850 09:11:55 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:42.850 00:35:42.850 real 0m38.564s 00:35:42.850 user 1m2.443s 00:35:42.850 sys 0m9.915s 00:35:42.850 09:11:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:42.850 09:11:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.850 ************************************ 00:35:42.850 END TEST nvmf_abort_qd_sizes 00:35:42.850 ************************************ 00:35:42.850 09:11:55 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.850 09:11:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:42.850 09:11:55 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:35:42.850 09:11:55 -- common/autotest_common.sh@10 -- # set +x 00:35:42.850 ************************************ 00:35:42.850 START TEST keyring_file 00:35:42.850 ************************************ 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:42.850 * Looking for test storage... 00:35:42.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1689 -- # lcov --version 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:42.850 09:11:55 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:42.850 09:11:55 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:42.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.850 --rc genhtml_branch_coverage=1 00:35:42.850 --rc genhtml_function_coverage=1 00:35:42.850 --rc genhtml_legend=1 00:35:42.850 --rc geninfo_all_blocks=1 00:35:42.850 --rc geninfo_unexecuted_blocks=1 00:35:42.850 00:35:42.850 ' 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:42.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.850 --rc genhtml_branch_coverage=1 00:35:42.850 --rc genhtml_function_coverage=1 00:35:42.850 --rc genhtml_legend=1 00:35:42.850 --rc geninfo_all_blocks=1 00:35:42.850 --rc 
geninfo_unexecuted_blocks=1 00:35:42.850 00:35:42.850 ' 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:42.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.850 --rc genhtml_branch_coverage=1 00:35:42.850 --rc genhtml_function_coverage=1 00:35:42.850 --rc genhtml_legend=1 00:35:42.850 --rc geninfo_all_blocks=1 00:35:42.850 --rc geninfo_unexecuted_blocks=1 00:35:42.850 00:35:42.850 ' 00:35:42.850 09:11:55 keyring_file -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:42.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:42.850 --rc genhtml_branch_coverage=1 00:35:42.850 --rc genhtml_function_coverage=1 00:35:42.850 --rc genhtml_legend=1 00:35:42.850 --rc geninfo_all_blocks=1 00:35:42.850 --rc geninfo_unexecuted_blocks=1 00:35:42.850 00:35:42.850 ' 00:35:42.850 09:11:55 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:42.850 09:11:55 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:42.850 09:11:55 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:42.850 09:11:55 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.850 09:11:55 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.850 09:11:55 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.851 09:11:55 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:42.851 09:11:55 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:42.851 09:11:55 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.851 09:11:55 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.851 09:11:55 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.851 09:11:55 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.851 09:11:55 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.851 09:11:55 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.851 09:11:55 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:42.851 09:11:55 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:42.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6UClzmRTXO 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 
-- # key=00112233445566778899aabbccddeeff 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6UClzmRTXO 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6UClzmRTXO 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6UClzmRTXO 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.QfjppVQbYI 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:42.851 09:11:55 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.QfjppVQbYI 00:35:42.851 09:11:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.QfjppVQbYI 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.QfjppVQbYI 
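The two `prep_key` calls above run each hex key through `format_interchange_psk`, which shells out to an inline `python -` snippet (the `nvmf/common.sh@731` step). A minimal sketch of what that step appears to produce, assuming the TP 8011-style PSK interchange layout (base64 of the raw key bytes followed by their little-endian CRC32, with the digest field rendered as two hex digits); the exact SPDK helper may differ in detail:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Sketch of the NVMe/TCP PSK interchange format:
    NVMeTLSkey-1:<digest>:<base64(key bytes || CRC32(key, LE))>:
    Assumed reconstruction of the test's inline python step, not the
    verbatim SPDK source."""
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))  # CRC32 over the raw key, little-endian
    blob = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{blob}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))
```

The formatted string is what gets written to the `mktemp` path and `chmod 0600`-ed before being handed to `keyring_file_add_key`.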
00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@30 -- # tgtpid=1017294 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:42.851 09:11:55 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1017294 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1017294 ']' 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:42.851 09:11:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:42.851 [2024-11-06 09:11:55.921335] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
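`waitforlisten` above blocks until the freshly launched `spdk_tgt` (pid 1017294) is accepting connections on `/var/tmp/spdk.sock`. A rough sketch of that polling loop, as a hypothetical reimplementation (the real helper lives in common/autotest_common.sh and also tracks the pid and retry budget):

```python
import socket
import time

def waitforlisten(path: str, timeout: float = 10.0, interval: float = 0.1) -> None:
    """Poll until something accepts connections on a UNIX domain socket,
    e.g. /var/tmp/spdk.sock (sketch of the harness helper)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                return  # target is up and listening
            except OSError:
                time.sleep(interval)  # not ready yet; retry until deadline
    raise TimeoutError(f"nothing listening on {path}")
```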
00:35:42.851 [2024-11-06 09:11:55.921423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017294 ] 00:35:42.851 [2024-11-06 09:11:55.987925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.851 [2024-11-06 09:11:56.042082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:43.109 09:11:56 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.109 [2024-11-06 09:11:56.307714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:43.109 null0 00:35:43.109 [2024-11-06 09:11:56.339780] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:43.109 [2024-11-06 09:11:56.340295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:43.109 09:11:56 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:43.109 09:11:56 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
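The `NOT rpc_cmd …` invocation beginning above (with its `valid_exec_arg` / `type -t` bookkeeping and the `es=` chain that follows) is the harness's negative-test wrapper: the wrapped command is *expected* to fail, and the wrapper itself succeeds only when it does, while treating signal deaths (exit status above 128) as a genuine failure. A hedged Python sketch of that logic, as an assumed reimplementation rather than the autotest_common.sh source:

```python
import subprocess

def NOT(*cmd: str) -> bool:
    """Expect `cmd` to fail: True only when it exits non-zero and was
    not killed by a signal (sketch of the shell NOT helper)."""
    es = subprocess.run(cmd, check=False).returncode
    # Mirrors the log's `es=1`, `(( es > 128 ))`, `(( !es == 0 ))` chain:
    # status 0 (command unexpectedly passed) and status >128 (signal)
    # both mean the negative test failed.
    return 0 < es <= 128

print(NOT("false"))  # an expected failure -> True
```

In the log, the wrapped `nvmf_subsystem_add_listener` duly fails with "Listener already exists" (-32602 Invalid parameters), so `es=1` and the NOT check passes.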
00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.110 [2024-11-06 09:11:56.363856] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:43.110 request: 00:35:43.110 { 00:35:43.110 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:43.110 "secure_channel": false, 00:35:43.110 "listen_address": { 00:35:43.110 "trtype": "tcp", 00:35:43.110 "traddr": "127.0.0.1", 00:35:43.110 "trsvcid": "4420" 00:35:43.110 }, 00:35:43.110 "method": "nvmf_subsystem_add_listener", 00:35:43.110 "req_id": 1 00:35:43.110 } 00:35:43.110 Got JSON-RPC error response 00:35:43.110 response: 00:35:43.110 { 00:35:43.110 "code": -32602, 00:35:43.110 "message": "Invalid parameters" 00:35:43.110 } 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:43.110 09:11:56 keyring_file -- keyring/file.sh@47 -- # bperfpid=1017309 00:35:43.110 09:11:56 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:43.110 09:11:56 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1017309 /var/tmp/bperf.sock 00:35:43.110 09:11:56 
keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1017309 ']' 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:43.110 09:11:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:43.367 [2024-11-06 09:11:56.414669] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 00:35:43.367 [2024-11-06 09:11:56.414759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017309 ] 00:35:43.367 [2024-11-06 09:11:56.481956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.367 [2024-11-06 09:11:56.540812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.367 09:11:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:43.367 09:11:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:43.367 09:11:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:43.367 09:11:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:43.932 09:11:56 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QfjppVQbYI 00:35:43.932 09:11:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QfjppVQbYI 00:35:43.932 09:11:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:43.932 09:11:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:43.932 09:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:43.932 09:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:43.932 09:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:44.190 09:11:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.6UClzmRTXO == \/\t\m\p\/\t\m\p\.\6\U\C\l\z\m\R\T\X\O ]] 00:35:44.190 09:11:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:44.190 09:11:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:44.190 09:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.190 09:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.190 09:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.755 09:11:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.QfjppVQbYI == \/\t\m\p\/\t\m\p\.\Q\f\j\p\p\V\Q\b\Y\I ]] 00:35:44.755 09:11:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:44.755 09:11:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:44.755 09:11:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.755 09:11:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.755 09:11:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:44.755 09:11:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
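The `get_key` / `get_refcnt` checks above shell out to `keyring_get_keys` over the bperf socket and filter the JSON array with jq (`.[] | select(.name == "key0")`, then `-r .path` or `-r .refcnt`). The same lookup expressed in Python, over a hypothetical sample shaped like the keyring state visible in the log:

```python
import json

def get_key(keys_json: str, name: str) -> dict:
    """Mimic jq's `.[] | select(.name == NAME)` over keyring_get_keys output."""
    return next(k for k in json.loads(keys_json) if k["name"] == name)

# Hypothetical sample; names, paths and refcnts mirror values seen in the log,
# but the full RPC response may carry additional fields.
sample = json.dumps([
    {"name": "key0", "path": "/tmp/tmp.6UClzmRTXO", "refcnt": 1, "removed": False},
    {"name": "key1", "path": "/tmp/tmp.QfjppVQbYI", "refcnt": 1, "removed": False},
])

print(get_key(sample, "key0")["refcnt"])  # -> 1
```

The `(( 1 == 1 ))` checks that follow in the log are exactly this refcnt comparison; once a controller attaches with `--psk key0`, the same filter is reused to observe the refcnt climbing to 2.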
00:35:44.755 09:11:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:44.755 09:11:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:44.755 09:11:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:44.755 09:11:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:44.755 09:11:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:44.755 09:11:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:44.755 09:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.013 09:11:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:45.013 09:11:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.013 09:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:45.270 [2024-11-06 09:11:58.551639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:45.528 nvme0n1 00:35:45.528 09:11:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:45.528 09:11:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:45.528 09:11:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.528 09:11:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.528 09:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.528 09:11:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:35:45.786 09:11:58 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:45.786 09:11:58 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:45.786 09:11:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:45.786 09:11:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:45.786 09:11:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:45.786 09:11:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:45.786 09:11:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:46.043 09:11:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:46.043 09:11:59 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.043 Running I/O for 1 seconds... 00:35:47.417 10185.00 IOPS, 39.79 MiB/s 00:35:47.417 Latency(us) 00:35:47.417 [2024-11-06T08:12:00.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:47.417 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:47.417 nvme0n1 : 1.01 10233.03 39.97 0.00 0.00 12468.42 6310.87 20874.43 00:35:47.417 [2024-11-06T08:12:00.706Z] =================================================================================================================== 00:35:47.417 [2024-11-06T08:12:00.706Z] Total : 10233.03 39.97 0.00 0.00 12468.42 6310.87 20874.43 00:35:47.417 { 00:35:47.417 "results": [ 00:35:47.417 { 00:35:47.417 "job": "nvme0n1", 00:35:47.417 "core_mask": "0x2", 00:35:47.417 "workload": "randrw", 00:35:47.417 "percentage": 50, 00:35:47.417 "status": "finished", 00:35:47.417 "queue_depth": 128, 00:35:47.417 "io_size": 4096, 00:35:47.417 "runtime": 1.007815, 00:35:47.417 "iops": 10233.028879308207, 00:35:47.417 "mibps": 39.972769059797685, 
00:35:47.417 "io_failed": 0, 00:35:47.417 "io_timeout": 0, 00:35:47.417 "avg_latency_us": 12468.423408068205, 00:35:47.417 "min_latency_us": 6310.874074074074, 00:35:47.417 "max_latency_us": 20874.42962962963 00:35:47.417 } 00:35:47.417 ], 00:35:47.417 "core_count": 1 00:35:47.417 } 00:35:47.417 09:12:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:47.417 09:12:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.417 09:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.676 09:12:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:47.676 09:12:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:47.676 09:12:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.676 09:12:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.676 09:12:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.676 09:12:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.676 09:12:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.933 09:12:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:47.933 09:12:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:47.933 09:12:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:47.933 09:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:48.271 [2024-11-06 09:12:01.438778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:48.271 [2024-11-06 09:12:01.439743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d2f0 (107): Transport endpoint is not connected 00:35:48.271 [2024-11-06 09:12:01.440735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x80d2f0 (9): Bad file descriptor 00:35:48.271 [2024-11-06 09:12:01.441735] 
nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:48.271 [2024-11-06 09:12:01.441755] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:48.271 [2024-11-06 09:12:01.441783] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:48.271 [2024-11-06 09:12:01.441798] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:48.271 request: 00:35:48.271 { 00:35:48.271 "name": "nvme0", 00:35:48.271 "trtype": "tcp", 00:35:48.271 "traddr": "127.0.0.1", 00:35:48.271 "adrfam": "ipv4", 00:35:48.271 "trsvcid": "4420", 00:35:48.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.271 "prchk_reftag": false, 00:35:48.271 "prchk_guard": false, 00:35:48.271 "hdgst": false, 00:35:48.271 "ddgst": false, 00:35:48.271 "psk": "key1", 00:35:48.271 "allow_unrecognized_csi": false, 00:35:48.271 "method": "bdev_nvme_attach_controller", 00:35:48.271 "req_id": 1 00:35:48.271 } 00:35:48.271 Got JSON-RPC error response 00:35:48.271 response: 00:35:48.271 { 00:35:48.271 "code": -5, 00:35:48.271 "message": "Input/output error" 00:35:48.271 } 00:35:48.271 09:12:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:48.271 09:12:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:48.271 09:12:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:48.271 09:12:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:48.271 09:12:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:48.271 09:12:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:48.271 09:12:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.271 09:12:01 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:48.271 09:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.271 09:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:48.544 09:12:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:48.544 09:12:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:48.545 09:12:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:48.545 09:12:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:48.545 09:12:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:48.545 09:12:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:48.545 09:12:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:48.802 09:12:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:48.802 09:12:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:48.802 09:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:49.061 09:12:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:49.061 09:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:49.319 09:12:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:49.319 09:12:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:49.319 09:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.577 09:12:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:49.577 09:12:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.6UClzmRTXO 00:35:49.577 09:12:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:49.577 09:12:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:49.577 09:12:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:49.835 [2024-11-06 09:12:03.093697] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6UClzmRTXO': 0100660 00:35:49.835 [2024-11-06 09:12:03.093732] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:49.835 request: 00:35:49.835 { 00:35:49.835 "name": "key0", 00:35:49.835 "path": "/tmp/tmp.6UClzmRTXO", 00:35:49.835 "method": "keyring_file_add_key", 00:35:49.835 "req_id": 1 00:35:49.835 } 00:35:49.835 Got JSON-RPC error response 00:35:49.835 response: 00:35:49.835 { 00:35:49.835 "code": -1, 00:35:49.835 "message": "Operation not permitted" 00:35:49.835 } 00:35:49.835 09:12:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:49.835 09:12:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:49.835 09:12:03 
keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:49.835 09:12:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:49.835 09:12:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.6UClzmRTXO 00:35:49.835 09:12:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:49.835 09:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6UClzmRTXO 00:35:50.400 09:12:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.6UClzmRTXO 00:35:50.400 09:12:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.400 09:12:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:50.400 09:12:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:35:50.400 09:12:03 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:50.400 09:12:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.400 09:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:50.657 [2024-11-06 09:12:03.911969] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6UClzmRTXO': No such file or directory 00:35:50.657 [2024-11-06 09:12:03.912003] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:50.657 [2024-11-06 09:12:03.912026] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:50.657 [2024-11-06 09:12:03.912039] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:50.657 [2024-11-06 09:12:03.912051] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:50.657 [2024-11-06 09:12:03.912063] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:50.657 request: 00:35:50.657 { 00:35:50.657 "name": "nvme0", 00:35:50.657 "trtype": "tcp", 00:35:50.657 "traddr": "127.0.0.1", 00:35:50.657 "adrfam": "ipv4", 00:35:50.657 "trsvcid": "4420", 00:35:50.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:50.657 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:50.657 "prchk_reftag": false, 00:35:50.657 "prchk_guard": false, 00:35:50.657 "hdgst": false, 00:35:50.657 "ddgst": false, 00:35:50.657 "psk": "key0", 00:35:50.657 "allow_unrecognized_csi": false, 00:35:50.657 "method": "bdev_nvme_attach_controller", 00:35:50.657 "req_id": 1 00:35:50.657 } 00:35:50.657 Got JSON-RPC error response 00:35:50.657 response: 00:35:50.657 { 00:35:50.657 "code": -19, 00:35:50.657 "message": "No such device" 00:35:50.657 } 00:35:50.657 09:12:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:50.657 09:12:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:50.657 09:12:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:50.657 09:12:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:50.657 09:12:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.657 09:12:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:50.915 09:12:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TvyHOqoYDe 00:35:50.915 09:12:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:50.915 09:12:04 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:50.915 09:12:04 keyring_file -- 
nvmf/common.sh@728 -- # local prefix key digest 00:35:50.915 09:12:04 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:50.915 09:12:04 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:35:50.915 09:12:04 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:50.915 09:12:04 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:51.173 09:12:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TvyHOqoYDe 00:35:51.173 09:12:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TvyHOqoYDe 00:35:51.173 09:12:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.TvyHOqoYDe 00:35:51.173 09:12:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TvyHOqoYDe 00:35:51.173 09:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TvyHOqoYDe 00:35:51.430 09:12:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.431 09:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.688 nvme0n1 00:35:51.688 09:12:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:51.688 09:12:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.688 09:12:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.688 09:12:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.688 09:12:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.688 
09:12:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.945 09:12:05 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:51.945 09:12:05 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:51.945 09:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:52.203 09:12:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:52.203 09:12:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:52.203 09:12:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.203 09:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.203 09:12:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.461 09:12:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:52.461 09:12:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:52.461 09:12:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.461 09:12:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.461 09:12:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.461 09:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.461 09:12:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.718 09:12:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:52.719 09:12:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:52.719 09:12:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:52.976 09:12:06 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:52.976 09:12:06 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:52.976 09:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.233 09:12:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:53.233 09:12:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TvyHOqoYDe 00:35:53.233 09:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TvyHOqoYDe 00:35:53.491 09:12:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.QfjppVQbYI 00:35:53.491 09:12:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.QfjppVQbYI 00:35:54.057 09:12:07 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.057 09:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.315 nvme0n1 00:35:54.315 09:12:07 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:54.315 09:12:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:54.573 09:12:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:54.573 "subsystems": [ 00:35:54.573 { 00:35:54.573 "subsystem": "keyring", 00:35:54.573 
"config": [ 00:35:54.573 { 00:35:54.573 "method": "keyring_file_add_key", 00:35:54.573 "params": { 00:35:54.573 "name": "key0", 00:35:54.573 "path": "/tmp/tmp.TvyHOqoYDe" 00:35:54.573 } 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "method": "keyring_file_add_key", 00:35:54.573 "params": { 00:35:54.573 "name": "key1", 00:35:54.573 "path": "/tmp/tmp.QfjppVQbYI" 00:35:54.573 } 00:35:54.573 } 00:35:54.573 ] 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "subsystem": "iobuf", 00:35:54.573 "config": [ 00:35:54.573 { 00:35:54.573 "method": "iobuf_set_options", 00:35:54.573 "params": { 00:35:54.573 "small_pool_count": 8192, 00:35:54.573 "large_pool_count": 1024, 00:35:54.573 "small_bufsize": 8192, 00:35:54.573 "large_bufsize": 135168, 00:35:54.573 "enable_numa": false 00:35:54.573 } 00:35:54.573 } 00:35:54.573 ] 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "subsystem": "sock", 00:35:54.573 "config": [ 00:35:54.573 { 00:35:54.573 "method": "sock_set_default_impl", 00:35:54.573 "params": { 00:35:54.573 "impl_name": "posix" 00:35:54.573 } 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "method": "sock_impl_set_options", 00:35:54.573 "params": { 00:35:54.573 "impl_name": "ssl", 00:35:54.573 "recv_buf_size": 4096, 00:35:54.573 "send_buf_size": 4096, 00:35:54.573 "enable_recv_pipe": true, 00:35:54.573 "enable_quickack": false, 00:35:54.573 "enable_placement_id": 0, 00:35:54.573 "enable_zerocopy_send_server": true, 00:35:54.573 "enable_zerocopy_send_client": false, 00:35:54.573 "zerocopy_threshold": 0, 00:35:54.573 "tls_version": 0, 00:35:54.573 "enable_ktls": false 00:35:54.573 } 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "method": "sock_impl_set_options", 00:35:54.573 "params": { 00:35:54.573 "impl_name": "posix", 00:35:54.573 "recv_buf_size": 2097152, 00:35:54.573 "send_buf_size": 2097152, 00:35:54.573 "enable_recv_pipe": true, 00:35:54.573 "enable_quickack": false, 00:35:54.573 "enable_placement_id": 0, 00:35:54.573 "enable_zerocopy_send_server": true, 00:35:54.573 
"enable_zerocopy_send_client": false, 00:35:54.573 "zerocopy_threshold": 0, 00:35:54.573 "tls_version": 0, 00:35:54.573 "enable_ktls": false 00:35:54.573 } 00:35:54.573 } 00:35:54.573 ] 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "subsystem": "vmd", 00:35:54.573 "config": [] 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "subsystem": "accel", 00:35:54.573 "config": [ 00:35:54.573 { 00:35:54.573 "method": "accel_set_options", 00:35:54.573 "params": { 00:35:54.573 "small_cache_size": 128, 00:35:54.573 "large_cache_size": 16, 00:35:54.573 "task_count": 2048, 00:35:54.573 "sequence_count": 2048, 00:35:54.573 "buf_count": 2048 00:35:54.573 } 00:35:54.573 } 00:35:54.573 ] 00:35:54.573 }, 00:35:54.573 { 00:35:54.573 "subsystem": "bdev", 00:35:54.573 "config": [ 00:35:54.573 { 00:35:54.573 "method": "bdev_set_options", 00:35:54.573 "params": { 00:35:54.573 "bdev_io_pool_size": 65535, 00:35:54.574 "bdev_io_cache_size": 256, 00:35:54.574 "bdev_auto_examine": true, 00:35:54.574 "iobuf_small_cache_size": 128, 00:35:54.574 "iobuf_large_cache_size": 16 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_raid_set_options", 00:35:54.574 "params": { 00:35:54.574 "process_window_size_kb": 1024, 00:35:54.574 "process_max_bandwidth_mb_sec": 0 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_iscsi_set_options", 00:35:54.574 "params": { 00:35:54.574 "timeout_sec": 30 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_nvme_set_options", 00:35:54.574 "params": { 00:35:54.574 "action_on_timeout": "none", 00:35:54.574 "timeout_us": 0, 00:35:54.574 "timeout_admin_us": 0, 00:35:54.574 "keep_alive_timeout_ms": 10000, 00:35:54.574 "arbitration_burst": 0, 00:35:54.574 "low_priority_weight": 0, 00:35:54.574 "medium_priority_weight": 0, 00:35:54.574 "high_priority_weight": 0, 00:35:54.574 "nvme_adminq_poll_period_us": 10000, 00:35:54.574 "nvme_ioq_poll_period_us": 0, 00:35:54.574 "io_queue_requests": 512, 00:35:54.574 
"delay_cmd_submit": true, 00:35:54.574 "transport_retry_count": 4, 00:35:54.574 "bdev_retry_count": 3, 00:35:54.574 "transport_ack_timeout": 0, 00:35:54.574 "ctrlr_loss_timeout_sec": 0, 00:35:54.574 "reconnect_delay_sec": 0, 00:35:54.574 "fast_io_fail_timeout_sec": 0, 00:35:54.574 "disable_auto_failback": false, 00:35:54.574 "generate_uuids": false, 00:35:54.574 "transport_tos": 0, 00:35:54.574 "nvme_error_stat": false, 00:35:54.574 "rdma_srq_size": 0, 00:35:54.574 "io_path_stat": false, 00:35:54.574 "allow_accel_sequence": false, 00:35:54.574 "rdma_max_cq_size": 0, 00:35:54.574 "rdma_cm_event_timeout_ms": 0, 00:35:54.574 "dhchap_digests": [ 00:35:54.574 "sha256", 00:35:54.574 "sha384", 00:35:54.574 "sha512" 00:35:54.574 ], 00:35:54.574 "dhchap_dhgroups": [ 00:35:54.574 "null", 00:35:54.574 "ffdhe2048", 00:35:54.574 "ffdhe3072", 00:35:54.574 "ffdhe4096", 00:35:54.574 "ffdhe6144", 00:35:54.574 "ffdhe8192" 00:35:54.574 ] 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_nvme_attach_controller", 00:35:54.574 "params": { 00:35:54.574 "name": "nvme0", 00:35:54.574 "trtype": "TCP", 00:35:54.574 "adrfam": "IPv4", 00:35:54.574 "traddr": "127.0.0.1", 00:35:54.574 "trsvcid": "4420", 00:35:54.574 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.574 "prchk_reftag": false, 00:35:54.574 "prchk_guard": false, 00:35:54.574 "ctrlr_loss_timeout_sec": 0, 00:35:54.574 "reconnect_delay_sec": 0, 00:35:54.574 "fast_io_fail_timeout_sec": 0, 00:35:54.574 "psk": "key0", 00:35:54.574 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.574 "hdgst": false, 00:35:54.574 "ddgst": false, 00:35:54.574 "multipath": "multipath" 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_nvme_set_hotplug", 00:35:54.574 "params": { 00:35:54.574 "period_us": 100000, 00:35:54.574 "enable": false 00:35:54.574 } 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 "method": "bdev_wait_for_examine" 00:35:54.574 } 00:35:54.574 ] 00:35:54.574 }, 00:35:54.574 { 00:35:54.574 
"subsystem": "nbd", 00:35:54.574 "config": [] 00:35:54.574 } 00:35:54.574 ] 00:35:54.574 }' 00:35:54.574 09:12:07 keyring_file -- keyring/file.sh@115 -- # killprocess 1017309 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1017309 ']' 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1017309 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017309 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017309' 00:35:54.574 killing process with pid 1017309 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@969 -- # kill 1017309 00:35:54.574 Received shutdown signal, test time was about 1.000000 seconds 00:35:54.574 00:35:54.574 Latency(us) 00:35:54.574 [2024-11-06T08:12:07.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.574 [2024-11-06T08:12:07.863Z] =================================================================================================================== 00:35:54.574 [2024-11-06T08:12:07.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.574 09:12:07 keyring_file -- common/autotest_common.sh@974 -- # wait 1017309 00:35:54.832 09:12:07 keyring_file -- keyring/file.sh@118 -- # bperfpid=1019315 00:35:54.832 09:12:07 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1019315 /var/tmp/bperf.sock 00:35:54.832 09:12:07 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1019315 ']' 00:35:54.832 09:12:07 keyring_file -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:54.832 09:12:07 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:54.832 09:12:07 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:54.832 09:12:07 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.832 09:12:07 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:54.832 "subsystems": [ 00:35:54.832 { 00:35:54.832 "subsystem": "keyring", 00:35:54.832 "config": [ 00:35:54.832 { 00:35:54.832 "method": "keyring_file_add_key", 00:35:54.832 "params": { 00:35:54.832 "name": "key0", 00:35:54.832 "path": "/tmp/tmp.TvyHOqoYDe" 00:35:54.832 } 00:35:54.832 }, 00:35:54.832 { 00:35:54.832 "method": "keyring_file_add_key", 00:35:54.832 "params": { 00:35:54.832 "name": "key1", 00:35:54.832 "path": "/tmp/tmp.QfjppVQbYI" 00:35:54.832 } 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "iobuf", 00:35:54.833 "config": [ 00:35:54.833 { 00:35:54.833 "method": "iobuf_set_options", 00:35:54.833 "params": { 00:35:54.833 "small_pool_count": 8192, 00:35:54.833 "large_pool_count": 1024, 00:35:54.833 "small_bufsize": 8192, 00:35:54.833 "large_bufsize": 135168, 00:35:54.833 "enable_numa": false 00:35:54.833 } 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "sock", 00:35:54.833 "config": [ 00:35:54.833 { 00:35:54.833 "method": "sock_set_default_impl", 00:35:54.833 "params": { 00:35:54.833 "impl_name": "posix" 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "sock_impl_set_options", 00:35:54.833 "params": { 00:35:54.833 "impl_name": "ssl", 00:35:54.833 "recv_buf_size": 4096, 00:35:54.833 
"send_buf_size": 4096, 00:35:54.833 "enable_recv_pipe": true, 00:35:54.833 "enable_quickack": false, 00:35:54.833 "enable_placement_id": 0, 00:35:54.833 "enable_zerocopy_send_server": true, 00:35:54.833 "enable_zerocopy_send_client": false, 00:35:54.833 "zerocopy_threshold": 0, 00:35:54.833 "tls_version": 0, 00:35:54.833 "enable_ktls": false 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "sock_impl_set_options", 00:35:54.833 "params": { 00:35:54.833 "impl_name": "posix", 00:35:54.833 "recv_buf_size": 2097152, 00:35:54.833 "send_buf_size": 2097152, 00:35:54.833 "enable_recv_pipe": true, 00:35:54.833 "enable_quickack": false, 00:35:54.833 "enable_placement_id": 0, 00:35:54.833 "enable_zerocopy_send_server": true, 00:35:54.833 "enable_zerocopy_send_client": false, 00:35:54.833 "zerocopy_threshold": 0, 00:35:54.833 "tls_version": 0, 00:35:54.833 "enable_ktls": false 00:35:54.833 } 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "vmd", 00:35:54.833 "config": [] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "accel", 00:35:54.833 "config": [ 00:35:54.833 { 00:35:54.833 "method": "accel_set_options", 00:35:54.833 "params": { 00:35:54.833 "small_cache_size": 128, 00:35:54.833 "large_cache_size": 16, 00:35:54.833 "task_count": 2048, 00:35:54.833 "sequence_count": 2048, 00:35:54.833 "buf_count": 2048 00:35:54.833 } 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "bdev", 00:35:54.833 "config": [ 00:35:54.833 { 00:35:54.833 "method": "bdev_set_options", 00:35:54.833 "params": { 00:35:54.833 "bdev_io_pool_size": 65535, 00:35:54.833 "bdev_io_cache_size": 256, 00:35:54.833 "bdev_auto_examine": true, 00:35:54.833 "iobuf_small_cache_size": 128, 00:35:54.833 "iobuf_large_cache_size": 16 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_raid_set_options", 00:35:54.833 "params": { 00:35:54.833 "process_window_size_kb": 1024, 00:35:54.833 
"process_max_bandwidth_mb_sec": 0 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_iscsi_set_options", 00:35:54.833 "params": { 00:35:54.833 "timeout_sec": 30 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_nvme_set_options", 00:35:54.833 "params": { 00:35:54.833 "action_on_timeout": "none", 00:35:54.833 "timeout_us": 0, 00:35:54.833 "timeout_admin_us": 0, 00:35:54.833 "keep_alive_timeout_ms": 10000, 00:35:54.833 "arbitration_burst": 0, 00:35:54.833 "low_priority_weight": 0, 00:35:54.833 "medium_priority_weight": 0, 00:35:54.833 "high_priority_weight": 0, 00:35:54.833 "nvme_adminq_poll_period_us": 10000, 00:35:54.833 "nvme_ioq_poll_period_us": 0, 00:35:54.833 "io_queue_requests": 512, 00:35:54.833 "delay_cmd_submit": true, 00:35:54.833 "transport_retry_count": 4, 00:35:54.833 "bdev_retry_count": 3, 00:35:54.833 "transport_ack_timeout": 0, 00:35:54.833 "ctrlr_loss_timeout_sec": 0, 00:35:54.833 "reconnect_delay_sec": 0, 00:35:54.833 "fast_io_fail_timeout_sec": 0, 00:35:54.833 "disable_auto_failback": false, 00:35:54.833 "generate_uuids": false, 00:35:54.833 "transport_tos": 0, 00:35:54.833 "nvme_error_stat": false, 00:35:54.833 "rdma_srq_size": 0, 00:35:54.833 "io_path_stat": false, 00:35:54.833 "allow_accel_sequence": false, 00:35:54.833 "rdma_max_cq_size": 0, 00:35:54.833 "rdma_cm_event_timeout_ms": 0, 00:35:54.833 "dhchap_digests": [ 00:35:54.833 "sha256", 00:35:54.833 "sha384", 00:35:54.833 "sha512" 00:35:54.833 ], 00:35:54.833 "dhchap_dhgroups": [ 00:35:54.833 "null", 00:35:54.833 "ffdhe2048", 00:35:54.833 "ffdhe3072", 00:35:54.833 "ffdhe4096", 00:35:54.833 "ffdhe6144", 00:35:54.833 "ffdhe8192" 00:35:54.833 ] 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_nvme_attach_controller", 00:35:54.833 "params": { 00:35:54.833 "name": "nvme0", 00:35:54.833 "trtype": "TCP", 00:35:54.833 "adrfam": "IPv4", 00:35:54.833 "traddr": "127.0.0.1", 00:35:54.833 "trsvcid": "4420", 00:35:54.833 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:54.833 "prchk_reftag": false, 00:35:54.833 "prchk_guard": false, 00:35:54.833 "ctrlr_loss_timeout_sec": 0, 00:35:54.833 "reconnect_delay_sec": 0, 00:35:54.833 "fast_io_fail_timeout_sec": 0, 00:35:54.833 "psk": "key0", 00:35:54.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.833 "hdgst": false, 00:35:54.833 "ddgst": false, 00:35:54.833 "multipath": "multipath" 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_nvme_set_hotplug", 00:35:54.833 "params": { 00:35:54.833 "period_us": 100000, 00:35:54.833 "enable": false 00:35:54.833 } 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "method": "bdev_wait_for_examine" 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }, 00:35:54.833 { 00:35:54.833 "subsystem": "nbd", 00:35:54.833 "config": [] 00:35:54.833 } 00:35:54.833 ] 00:35:54.833 }' 00:35:54.833 09:12:07 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:54.833 09:12:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.833 [2024-11-06 09:12:08.026320] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:35:54.833 [2024-11-06 09:12:08.026412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019315 ] 00:35:54.833 [2024-11-06 09:12:08.092987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.092 [2024-11-06 09:12:08.154252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.092 [2024-11-06 09:12:08.339380] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:55.350 09:12:08 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:55.350 09:12:08 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:55.350 09:12:08 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:55.350 09:12:08 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:55.350 09:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.607 09:12:08 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:55.607 09:12:08 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:55.607 09:12:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.607 09:12:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.607 09:12:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.607 09:12:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.607 09:12:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.865 09:12:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:55.865 09:12:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:55.865 09:12:09 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.865 09:12:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.865 09:12:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.865 09:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.865 09:12:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:56.122 09:12:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:56.122 09:12:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:56.122 09:12:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:56.122 09:12:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:56.380 09:12:09 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:56.380 09:12:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:56.380 09:12:09 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TvyHOqoYDe /tmp/tmp.QfjppVQbYI 00:35:56.380 09:12:09 keyring_file -- keyring/file.sh@20 -- # killprocess 1019315 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1019315 ']' 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1019315 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019315 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1019315' 00:35:56.380 killing process with pid 1019315 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@969 -- # kill 1019315 00:35:56.380 Received shutdown signal, test time was about 1.000000 seconds 00:35:56.380 00:35:56.380 Latency(us) 00:35:56.380 [2024-11-06T08:12:09.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.380 [2024-11-06T08:12:09.669Z] =================================================================================================================== 00:35:56.380 [2024-11-06T08:12:09.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:56.380 09:12:09 keyring_file -- common/autotest_common.sh@974 -- # wait 1019315 00:35:56.637 09:12:09 keyring_file -- keyring/file.sh@21 -- # killprocess 1017294 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1017294 ']' 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1017294 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@955 -- # uname 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1017294 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1017294' 00:35:56.637 killing process with pid 1017294 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@969 -- # kill 1017294 00:35:56.637 09:12:09 keyring_file -- common/autotest_common.sh@974 -- # wait 1017294 00:35:57.202 00:35:57.202 real 0m14.714s 00:35:57.202 user 0m37.407s 00:35:57.202 sys 0m3.267s 00:35:57.202 09:12:10 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:35:57.202 09:12:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.202 ************************************ 00:35:57.202 END TEST keyring_file 00:35:57.202 ************************************ 00:35:57.202 09:12:10 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:35:57.202 09:12:10 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:57.202 09:12:10 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:35:57.202 09:12:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:57.202 09:12:10 -- common/autotest_common.sh@10 -- # set +x 00:35:57.202 ************************************ 00:35:57.202 START TEST keyring_linux 00:35:57.202 ************************************ 00:35:57.202 09:12:10 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:57.202 Joined session keyring: 123697821 00:35:57.202 * Looking for test storage... 
00:35:57.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:57.202 09:12:10 keyring_linux -- common/autotest_common.sh@1688 -- # [[ y == y ]] 00:35:57.202 09:12:10 keyring_linux -- common/autotest_common.sh@1689 -- # lcov --version 00:35:57.202 09:12:10 keyring_linux -- common/autotest_common.sh@1689 -- # awk '{print $NF}' 00:35:57.460 09:12:10 keyring_linux -- common/autotest_common.sh@1689 -- # lt 1.15 2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:57.461 09:12:10 keyring_linux -- common/autotest_common.sh@1690 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.461 09:12:10 keyring_linux -- common/autotest_common.sh@1702 -- # export 'LCOV_OPTS= 00:35:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.461 --rc genhtml_branch_coverage=1 00:35:57.461 --rc genhtml_function_coverage=1 00:35:57.461 --rc genhtml_legend=1 00:35:57.461 --rc geninfo_all_blocks=1 00:35:57.461 --rc geninfo_unexecuted_blocks=1 00:35:57.461 00:35:57.461 ' 00:35:57.461 09:12:10 keyring_linux -- common/autotest_common.sh@1702 -- # LCOV_OPTS=' 00:35:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.461 --rc genhtml_branch_coverage=1 00:35:57.461 --rc genhtml_function_coverage=1 00:35:57.461 --rc genhtml_legend=1 00:35:57.461 --rc geninfo_all_blocks=1 00:35:57.461 --rc geninfo_unexecuted_blocks=1 00:35:57.461 00:35:57.461 ' 
00:35:57.461 09:12:10 keyring_linux -- common/autotest_common.sh@1703 -- # export 'LCOV=lcov 00:35:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.461 --rc genhtml_branch_coverage=1 00:35:57.461 --rc genhtml_function_coverage=1 00:35:57.461 --rc genhtml_legend=1 00:35:57.461 --rc geninfo_all_blocks=1 00:35:57.461 --rc geninfo_unexecuted_blocks=1 00:35:57.461 00:35:57.461 ' 00:35:57.461 09:12:10 keyring_linux -- common/autotest_common.sh@1703 -- # LCOV='lcov 00:35:57.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.461 --rc genhtml_branch_coverage=1 00:35:57.461 --rc genhtml_function_coverage=1 00:35:57.461 --rc genhtml_legend=1 00:35:57.461 --rc geninfo_all_blocks=1 00:35:57.461 --rc geninfo_unexecuted_blocks=1 00:35:57.461 00:35:57.461 ' 00:35:57.461 09:12:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:57.461 09:12:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.461 09:12:10 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.461 09:12:10 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.461 09:12:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.461 09:12:10 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.461 09:12:10 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.461 09:12:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:57.462 09:12:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:57.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # 
key=00112233445566778899aabbccddeeff 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:57.462 /tmp/:spdk-test:key0 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:35:57.462 09:12:10 keyring_linux -- nvmf/common.sh@731 -- # python - 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:57.462 09:12:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:57.462 /tmp/:spdk-test:key1 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1019881 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:57.462 09:12:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1019881 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1019881 ']' 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.462 09:12:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.462 [2024-11-06 09:12:10.681322] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:35:57.462 [2024-11-06 09:12:10.681423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019881 ] 00:35:57.462 [2024-11-06 09:12:10.744657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.720 [2024-11-06 09:12:10.799226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.978 [2024-11-06 09:12:11.066986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.978 null0 00:35:57.978 [2024-11-06 09:12:11.099047] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.978 [2024-11-06 09:12:11.099567] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:57.978 765380005 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:57.978 1006819251 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1019888 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k 
-w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:57.978 09:12:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1019888 /var/tmp/bperf.sock 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1019888 ']' 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.978 09:12:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.978 [2024-11-06 09:12:11.165777] Starting SPDK v25.01-pre git sha1 481542548 / DPDK 24.03.0 initialization... 
00:35:57.978 [2024-11-06 09:12:11.165878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019888 ] 00:35:57.978 [2024-11-06 09:12:11.230636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.235 [2024-11-06 09:12:11.290345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.235 09:12:11 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:58.235 09:12:11 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:35:58.235 09:12:11 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:58.235 09:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:58.492 09:12:11 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:58.492 09:12:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:58.750 09:12:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:58.750 09:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:59.007 [2024-11-06 09:12:12.265032] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:59.265 nvme0n1 00:35:59.265 09:12:12 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:59.265 09:12:12 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:59.265 09:12:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:59.265 09:12:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:59.265 09:12:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:59.265 09:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.523 09:12:12 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:59.523 09:12:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:59.523 09:12:12 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:59.523 09:12:12 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:59.523 09:12:12 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.523 09:12:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.523 09:12:12 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@25 -- # sn=765380005 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@26 -- # [[ 765380005 == \7\6\5\3\8\0\0\0\5 ]] 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 765380005 00:35:59.780 09:12:12 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:59.780 09:12:12 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.780 Running I/O for 1 seconds... 00:36:00.786 11054.00 IOPS, 43.18 MiB/s 00:36:00.786 Latency(us) 00:36:00.786 [2024-11-06T08:12:14.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.786 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:00.786 nvme0n1 : 1.01 11065.24 43.22 0.00 0.00 11499.19 8107.05 19612.25 00:36:00.786 [2024-11-06T08:12:14.075Z] =================================================================================================================== 00:36:00.786 [2024-11-06T08:12:14.075Z] Total : 11065.24 43.22 0.00 0.00 11499.19 8107.05 19612.25 00:36:00.786 { 00:36:00.786 "results": [ 00:36:00.786 { 00:36:00.786 "job": "nvme0n1", 00:36:00.786 "core_mask": "0x2", 00:36:00.786 "workload": "randread", 00:36:00.786 "status": "finished", 00:36:00.786 "queue_depth": 128, 00:36:00.786 "io_size": 4096, 00:36:00.786 "runtime": 1.010642, 00:36:00.786 "iops": 11065.243676791584, 00:36:00.786 "mibps": 43.223608112467126, 00:36:00.786 "io_failed": 0, 00:36:00.786 "io_timeout": 0, 00:36:00.786 "avg_latency_us": 11499.194460374709, 00:36:00.786 "min_latency_us": 8107.045925925926, 00:36:00.786 "max_latency_us": 19612.254814814816 00:36:00.786 } 00:36:00.786 ], 00:36:00.786 "core_count": 1 00:36:00.786 } 00:36:00.786 09:12:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:00.786 09:12:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:01.044 09:12:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:01.044 09:12:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:01.044 09:12:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:01.044 09:12:14 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:01.044 09:12:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:01.044 09:12:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:01.610 09:12:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:01.610 [2024-11-06 09:12:14.862961] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:01.610 [2024-11-06 09:12:14.863073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6740a0 (107): Transport endpoint is not connected 00:36:01.610 [2024-11-06 09:12:14.864065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6740a0 (9): Bad file descriptor 00:36:01.610 [2024-11-06 09:12:14.865064] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:01.610 [2024-11-06 09:12:14.865086] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:01.610 [2024-11-06 09:12:14.865101] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:01.610 [2024-11-06 09:12:14.865135] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:01.610 request: 00:36:01.610 { 00:36:01.610 "name": "nvme0", 00:36:01.610 "trtype": "tcp", 00:36:01.610 "traddr": "127.0.0.1", 00:36:01.610 "adrfam": "ipv4", 00:36:01.610 "trsvcid": "4420", 00:36:01.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.610 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.610 "prchk_reftag": false, 00:36:01.610 "prchk_guard": false, 00:36:01.610 "hdgst": false, 00:36:01.610 "ddgst": false, 00:36:01.610 "psk": ":spdk-test:key1", 00:36:01.610 "allow_unrecognized_csi": false, 00:36:01.610 "method": "bdev_nvme_attach_controller", 00:36:01.610 "req_id": 1 00:36:01.610 } 00:36:01.610 Got JSON-RPC error response 00:36:01.610 response: 00:36:01.610 { 00:36:01.610 "code": -5, 00:36:01.610 "message": "Input/output error" 00:36:01.610 } 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@33 -- # sn=765380005 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 765380005 00:36:01.610 1 links removed 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:01.610 
09:12:14 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@33 -- # sn=1006819251 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1006819251 00:36:01.610 1 links removed 00:36:01.610 09:12:14 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1019888 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1019888 ']' 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1019888 00:36:01.610 09:12:14 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019888 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019888' 00:36:01.868 killing process with pid 1019888 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@969 -- # kill 1019888 00:36:01.868 Received shutdown signal, test time was about 1.000000 seconds 00:36:01.868 00:36:01.868 Latency(us) 00:36:01.868 [2024-11-06T08:12:15.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.868 [2024-11-06T08:12:15.157Z] =================================================================================================================== 00:36:01.868 [2024-11-06T08:12:15.157Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:01.868 09:12:14 keyring_linux -- common/autotest_common.sh@974 -- # wait 
1019888 00:36:01.868 09:12:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1019881 00:36:01.868 09:12:15 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1019881 ']' 00:36:01.868 09:12:15 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1019881 00:36:01.868 09:12:15 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:01.868 09:12:15 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:01.868 09:12:15 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1019881 00:36:02.126 09:12:15 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:02.126 09:12:15 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:02.126 09:12:15 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1019881' 00:36:02.126 killing process with pid 1019881 00:36:02.126 09:12:15 keyring_linux -- common/autotest_common.sh@969 -- # kill 1019881 00:36:02.126 09:12:15 keyring_linux -- common/autotest_common.sh@974 -- # wait 1019881 00:36:02.386 00:36:02.386 real 0m5.234s 00:36:02.386 user 0m10.337s 00:36:02.386 sys 0m1.646s 00:36:02.386 09:12:15 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:02.386 09:12:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:02.386 ************************************ 00:36:02.386 END TEST keyring_linux 00:36:02.386 ************************************ 00:36:02.386 09:12:15 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- 
spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:02.386 09:12:15 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:02.386 09:12:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:02.386 09:12:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:02.386 09:12:15 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:02.386 09:12:15 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:02.386 09:12:15 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:02.386 09:12:15 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.386 09:12:15 -- common/autotest_common.sh@10 -- # set +x 00:36:02.386 09:12:15 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:02.386 09:12:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:02.386 09:12:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:02.386 09:12:15 -- common/autotest_common.sh@10 -- # set +x 00:36:04.291 INFO: APP EXITING 00:36:04.291 INFO: killing all VMs 00:36:04.291 INFO: killing vhost app 00:36:04.291 INFO: EXIT DONE 00:36:05.667 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:05.667 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:05.667 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:05.667 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:05.667 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:05.667 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:05.667 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:05.667 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:05.667 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:05.667 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:05.667 0000:80:04.6 (8086 0e26): Already using the 
ioatdma driver 00:36:05.667 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:05.667 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:05.667 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:05.667 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:05.667 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:05.667 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:07.043 Cleaning 00:36:07.043 Removing: /var/run/dpdk/spdk0/config 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:07.043 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:07.043 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:07.043 Removing: /var/run/dpdk/spdk1/config 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:07.043 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:07.043 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:07.043 Removing: /var/run/dpdk/spdk2/config 00:36:07.043 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:07.043 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:07.043 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:07.043 Removing: /var/run/dpdk/spdk3/config 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:07.043 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:07.043 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:07.043 Removing: /var/run/dpdk/spdk4/config 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:07.043 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:36:07.043 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:07.043 Removing: /dev/shm/bdev_svc_trace.1 00:36:07.043 Removing: /dev/shm/nvmf_trace.0 00:36:07.043 Removing: /dev/shm/spdk_tgt_trace.pid700579 00:36:07.043 Removing: /var/run/dpdk/spdk0 00:36:07.043 Removing: /var/run/dpdk/spdk1 00:36:07.043 Removing: /var/run/dpdk/spdk2 00:36:07.043 Removing: /var/run/dpdk/spdk3 00:36:07.043 Removing: /var/run/dpdk/spdk4 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1000486 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1001884 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1003400 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1004151 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1005552 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1006431 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1011829 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1012220 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1012608 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1014166 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1014566 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1014845 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1017294 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1017309 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1019315 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1019881 00:36:07.043 Removing: /var/run/dpdk/spdk_pid1019888 00:36:07.043 Removing: /var/run/dpdk/spdk_pid699014 00:36:07.043 Removing: /var/run/dpdk/spdk_pid699756 00:36:07.043 Removing: /var/run/dpdk/spdk_pid700579 00:36:07.043 Removing: /var/run/dpdk/spdk_pid701026 00:36:07.043 Removing: /var/run/dpdk/spdk_pid701719 00:36:07.043 Removing: /var/run/dpdk/spdk_pid701876 00:36:07.043 Removing: /var/run/dpdk/spdk_pid702588 00:36:07.043 Removing: /var/run/dpdk/spdk_pid702687 00:36:07.043 Removing: /var/run/dpdk/spdk_pid702968 00:36:07.043 Removing: /var/run/dpdk/spdk_pid704187 00:36:07.043 Removing: /var/run/dpdk/spdk_pid705102 00:36:07.043 Removing: /var/run/dpdk/spdk_pid705415 00:36:07.043 Removing: /var/run/dpdk/spdk_pid705615 
00:36:07.043 Removing: /var/run/dpdk/spdk_pid705833 00:36:07.043 Removing: /var/run/dpdk/spdk_pid706032 00:36:07.043 Removing: /var/run/dpdk/spdk_pid706236 00:36:07.043 Removing: /var/run/dpdk/spdk_pid706456 00:36:07.043 Removing: /var/run/dpdk/spdk_pid706656 00:36:07.043 Removing: /var/run/dpdk/spdk_pid706855 00:36:07.043 Removing: /var/run/dpdk/spdk_pid709339 00:36:07.043 Removing: /var/run/dpdk/spdk_pid709597 00:36:07.043 Removing: /var/run/dpdk/spdk_pid709784 00:36:07.043 Removing: /var/run/dpdk/spdk_pid709789 00:36:07.043 Removing: /var/run/dpdk/spdk_pid710113 00:36:07.043 Removing: /var/run/dpdk/spdk_pid710221 00:36:07.043 Removing: /var/run/dpdk/spdk_pid710537 00:36:07.043 Removing: /var/run/dpdk/spdk_pid710661 00:36:07.302 Removing: /var/run/dpdk/spdk_pid710833 00:36:07.302 Removing: /var/run/dpdk/spdk_pid710961 00:36:07.302 Removing: /var/run/dpdk/spdk_pid711123 00:36:07.302 Removing: /var/run/dpdk/spdk_pid711134 00:36:07.302 Removing: /var/run/dpdk/spdk_pid711630 00:36:07.302 Removing: /var/run/dpdk/spdk_pid711784 00:36:07.302 Removing: /var/run/dpdk/spdk_pid711989 00:36:07.302 Removing: /var/run/dpdk/spdk_pid714154 00:36:07.302 Removing: /var/run/dpdk/spdk_pid716746 00:36:07.302 Removing: /var/run/dpdk/spdk_pid724479 00:36:07.302 Removing: /var/run/dpdk/spdk_pid724893 00:36:07.302 Removing: /var/run/dpdk/spdk_pid727408 00:36:07.302 Removing: /var/run/dpdk/spdk_pid727573 00:36:07.302 Removing: /var/run/dpdk/spdk_pid730223 00:36:07.302 Removing: /var/run/dpdk/spdk_pid733952 00:36:07.302 Removing: /var/run/dpdk/spdk_pid736139 00:36:07.302 Removing: /var/run/dpdk/spdk_pid742569 00:36:07.302 Removing: /var/run/dpdk/spdk_pid747805 00:36:07.302 Removing: /var/run/dpdk/spdk_pid749124 00:36:07.302 Removing: /var/run/dpdk/spdk_pid749788 00:36:07.302 Removing: /var/run/dpdk/spdk_pid760686 00:36:07.302 Removing: /var/run/dpdk/spdk_pid763089 00:36:07.302 Removing: /var/run/dpdk/spdk_pid790367 00:36:07.302 Removing: /var/run/dpdk/spdk_pid793782 00:36:07.302 Removing: 
/var/run/dpdk/spdk_pid798116 00:36:07.302 Removing: /var/run/dpdk/spdk_pid802513 00:36:07.302 Removing: /var/run/dpdk/spdk_pid802515 00:36:07.302 Removing: /var/run/dpdk/spdk_pid803156 00:36:07.302 Removing: /var/run/dpdk/spdk_pid803717 00:36:07.302 Removing: /var/run/dpdk/spdk_pid804367 00:36:07.302 Removing: /var/run/dpdk/spdk_pid804766 00:36:07.302 Removing: /var/run/dpdk/spdk_pid804775 00:36:07.302 Removing: /var/run/dpdk/spdk_pid805032 00:36:07.302 Removing: /var/run/dpdk/spdk_pid805060 00:36:07.302 Removing: /var/run/dpdk/spdk_pid805171 00:36:07.302 Removing: /var/run/dpdk/spdk_pid805717 00:36:07.302 Removing: /var/run/dpdk/spdk_pid806369 00:36:07.302 Removing: /var/run/dpdk/spdk_pid807030 00:36:07.302 Removing: /var/run/dpdk/spdk_pid807435 00:36:07.302 Removing: /var/run/dpdk/spdk_pid807438 00:36:07.302 Removing: /var/run/dpdk/spdk_pid807699 00:36:07.302 Removing: /var/run/dpdk/spdk_pid808594 00:36:07.302 Removing: /var/run/dpdk/spdk_pid809317 00:36:07.302 Removing: /var/run/dpdk/spdk_pid814673 00:36:07.302 Removing: /var/run/dpdk/spdk_pid842643 00:36:07.302 Removing: /var/run/dpdk/spdk_pid845686 00:36:07.302 Removing: /var/run/dpdk/spdk_pid847371 00:36:07.302 Removing: /var/run/dpdk/spdk_pid848695 00:36:07.302 Removing: /var/run/dpdk/spdk_pid848835 00:36:07.302 Removing: /var/run/dpdk/spdk_pid848973 00:36:07.302 Removing: /var/run/dpdk/spdk_pid849114 00:36:07.302 Removing: /var/run/dpdk/spdk_pid849559 00:36:07.302 Removing: /var/run/dpdk/spdk_pid850881 00:36:07.302 Removing: /var/run/dpdk/spdk_pid851735 00:36:07.302 Removing: /var/run/dpdk/spdk_pid852166 00:36:07.302 Removing: /var/run/dpdk/spdk_pid853806 00:36:07.302 Removing: /var/run/dpdk/spdk_pid854226 00:36:07.302 Removing: /var/run/dpdk/spdk_pid854672 00:36:07.302 Removing: /var/run/dpdk/spdk_pid857064 00:36:07.302 Removing: /var/run/dpdk/spdk_pid860464 00:36:07.302 Removing: /var/run/dpdk/spdk_pid860465 00:36:07.302 Removing: /var/run/dpdk/spdk_pid860466 00:36:07.302 Removing: 
/var/run/dpdk/spdk_pid862689 00:36:07.302 Removing: /var/run/dpdk/spdk_pid867417 00:36:07.302 Removing: /var/run/dpdk/spdk_pid870081 00:36:07.302 Removing: /var/run/dpdk/spdk_pid873838 00:36:07.302 Removing: /var/run/dpdk/spdk_pid874904 00:36:07.302 Removing: /var/run/dpdk/spdk_pid875874 00:36:07.302 Removing: /var/run/dpdk/spdk_pid877075 00:36:07.302 Removing: /var/run/dpdk/spdk_pid880408 00:36:07.302 Removing: /var/run/dpdk/spdk_pid882772 00:36:07.302 Removing: /var/run/dpdk/spdk_pid887014 00:36:07.302 Removing: /var/run/dpdk/spdk_pid887132 00:36:07.302 Removing: /var/run/dpdk/spdk_pid889919 00:36:07.302 Removing: /var/run/dpdk/spdk_pid890054 00:36:07.302 Removing: /var/run/dpdk/spdk_pid890190 00:36:07.302 Removing: /var/run/dpdk/spdk_pid890472 00:36:07.302 Removing: /var/run/dpdk/spdk_pid890577 00:36:07.302 Removing: /var/run/dpdk/spdk_pid893283 00:36:07.302 Removing: /var/run/dpdk/spdk_pid893687 00:36:07.302 Removing: /var/run/dpdk/spdk_pid896354 00:36:07.302 Removing: /var/run/dpdk/spdk_pid898212 00:36:07.302 Removing: /var/run/dpdk/spdk_pid901649 00:36:07.302 Removing: /var/run/dpdk/spdk_pid905222 00:36:07.302 Removing: /var/run/dpdk/spdk_pid911729 00:36:07.302 Removing: /var/run/dpdk/spdk_pid916836 00:36:07.302 Removing: /var/run/dpdk/spdk_pid916838 00:36:07.302 Removing: /var/run/dpdk/spdk_pid929238 00:36:07.302 Removing: /var/run/dpdk/spdk_pid929740 00:36:07.302 Removing: /var/run/dpdk/spdk_pid930152 00:36:07.302 Removing: /var/run/dpdk/spdk_pid930676 00:36:07.302 Removing: /var/run/dpdk/spdk_pid931256 00:36:07.302 Removing: /var/run/dpdk/spdk_pid931663 00:36:07.302 Removing: /var/run/dpdk/spdk_pid932073 00:36:07.302 Removing: /var/run/dpdk/spdk_pid932483 00:36:07.302 Removing: /var/run/dpdk/spdk_pid934990 00:36:07.302 Removing: /var/run/dpdk/spdk_pid935133 00:36:07.302 Removing: /var/run/dpdk/spdk_pid938935 00:36:07.302 Removing: /var/run/dpdk/spdk_pid939110 00:36:07.302 Removing: /var/run/dpdk/spdk_pid942474 00:36:07.302 Removing: 
/var/run/dpdk/spdk_pid945053 00:36:07.302 Removing: /var/run/dpdk/spdk_pid952496 00:36:07.302 Removing: /var/run/dpdk/spdk_pid953017 00:36:07.302 Removing: /var/run/dpdk/spdk_pid955406 00:36:07.302 Removing: /var/run/dpdk/spdk_pid955679 00:36:07.302 Removing: /var/run/dpdk/spdk_pid958194 00:36:07.302 Removing: /var/run/dpdk/spdk_pid962005 00:36:07.562 Removing: /var/run/dpdk/spdk_pid964060 00:36:07.562 Removing: /var/run/dpdk/spdk_pid970422 00:36:07.562 Removing: /var/run/dpdk/spdk_pid975636 00:36:07.562 Removing: /var/run/dpdk/spdk_pid976936 00:36:07.562 Removing: /var/run/dpdk/spdk_pid977600 00:36:07.562 Removing: /var/run/dpdk/spdk_pid988312 00:36:07.562 Removing: /var/run/dpdk/spdk_pid990535 00:36:07.562 Removing: /var/run/dpdk/spdk_pid992534 00:36:07.562 Removing: /var/run/dpdk/spdk_pid997579 00:36:07.562 Removing: /var/run/dpdk/spdk_pid997584 00:36:07.562 Clean 00:36:07.562 09:12:20 -- common/autotest_common.sh@1449 -- # return 0 00:36:07.562 09:12:20 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:07.562 09:12:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.562 09:12:20 -- common/autotest_common.sh@10 -- # set +x 00:36:07.562 09:12:20 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:36:07.562 09:12:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:07.562 09:12:20 -- common/autotest_common.sh@10 -- # set +x 00:36:07.562 09:12:20 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:07.562 09:12:20 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:07.562 09:12:20 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:07.562 09:12:20 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:07.562 09:12:20 -- spdk/autotest.sh@394 -- # hostname 00:36:07.562 09:12:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:07.820 geninfo: WARNING: invalid characters removed from testname! 00:36:39.902 09:12:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:41.812 09:12:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:45.107 09:12:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:48.437 09:13:01 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:50.976 09:13:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:54.270 09:13:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:56.810 09:13:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:56.810 09:13:10 -- common/autotest_common.sh@1688 -- $ [[ y == y ]] 00:36:56.810 09:13:10 -- common/autotest_common.sh@1689 -- $ lcov --version 00:36:56.810 09:13:10 -- common/autotest_common.sh@1689 -- $ awk '{print $NF}' 00:36:57.069 09:13:10 -- common/autotest_common.sh@1689 -- $ lt 1.15 2 00:36:57.069 09:13:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:36:57.069 09:13:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:36:57.069 09:13:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:36:57.069 09:13:10 -- scripts/common.sh@336 -- $ IFS=.-: 00:36:57.069 09:13:10 -- scripts/common.sh@336 -- $ read -ra ver1 00:36:57.069 09:13:10 -- scripts/common.sh@337 -- $ IFS=.-: 00:36:57.069 09:13:10 -- scripts/common.sh@337 -- $ read -ra ver2 00:36:57.069 
09:13:10 -- scripts/common.sh@338 -- $ local 'op=<' 00:36:57.069 09:13:10 -- scripts/common.sh@340 -- $ ver1_l=2 00:36:57.069 09:13:10 -- scripts/common.sh@341 -- $ ver2_l=1 00:36:57.069 09:13:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:36:57.069 09:13:10 -- scripts/common.sh@344 -- $ case "$op" in 00:36:57.069 09:13:10 -- scripts/common.sh@345 -- $ : 1 00:36:57.069 09:13:10 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:36:57.069 09:13:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:57.069 09:13:10 -- scripts/common.sh@365 -- $ decimal 1 00:36:57.069 09:13:10 -- scripts/common.sh@353 -- $ local d=1 00:36:57.069 09:13:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:36:57.069 09:13:10 -- scripts/common.sh@355 -- $ echo 1 00:36:57.069 09:13:10 -- scripts/common.sh@365 -- $ ver1[v]=1 00:36:57.069 09:13:10 -- scripts/common.sh@366 -- $ decimal 2 00:36:57.069 09:13:10 -- scripts/common.sh@353 -- $ local d=2 00:36:57.069 09:13:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:36:57.069 09:13:10 -- scripts/common.sh@355 -- $ echo 2 00:36:57.069 09:13:10 -- scripts/common.sh@366 -- $ ver2[v]=2 00:36:57.069 09:13:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:36:57.069 09:13:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:36:57.069 09:13:10 -- scripts/common.sh@368 -- $ return 0 00:36:57.069 09:13:10 -- common/autotest_common.sh@1690 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.069 09:13:10 -- common/autotest_common.sh@1702 -- $ export 'LCOV_OPTS= 00:36:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.069 --rc genhtml_branch_coverage=1 00:36:57.069 --rc genhtml_function_coverage=1 00:36:57.069 --rc genhtml_legend=1 00:36:57.069 --rc geninfo_all_blocks=1 00:36:57.069 --rc geninfo_unexecuted_blocks=1 00:36:57.069 00:36:57.069 ' 00:36:57.069 09:13:10 -- common/autotest_common.sh@1702 -- $ LCOV_OPTS=' 00:36:57.069 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.069 --rc genhtml_branch_coverage=1 00:36:57.069 --rc genhtml_function_coverage=1 00:36:57.069 --rc genhtml_legend=1 00:36:57.069 --rc geninfo_all_blocks=1 00:36:57.069 --rc geninfo_unexecuted_blocks=1 00:36:57.069 00:36:57.069 ' 00:36:57.069 09:13:10 -- common/autotest_common.sh@1703 -- $ export 'LCOV=lcov 00:36:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.069 --rc genhtml_branch_coverage=1 00:36:57.069 --rc genhtml_function_coverage=1 00:36:57.069 --rc genhtml_legend=1 00:36:57.069 --rc geninfo_all_blocks=1 00:36:57.069 --rc geninfo_unexecuted_blocks=1 00:36:57.069 00:36:57.069 ' 00:36:57.069 09:13:10 -- common/autotest_common.sh@1703 -- $ LCOV='lcov 00:36:57.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.069 --rc genhtml_branch_coverage=1 00:36:57.069 --rc genhtml_function_coverage=1 00:36:57.069 --rc genhtml_legend=1 00:36:57.069 --rc geninfo_all_blocks=1 00:36:57.069 --rc geninfo_unexecuted_blocks=1 00:36:57.069 00:36:57.069 ' 00:36:57.069 09:13:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.070 09:13:10 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:57.070 09:13:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:57.070 09:13:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.070 09:13:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.070 09:13:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.070 09:13:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.070 09:13:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.070 09:13:10 -- paths/export.sh@5 -- $ export PATH 00:36:57.070 09:13:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.070 09:13:10 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:57.070 09:13:10 -- common/autobuild_common.sh@486 -- $ date +%s 00:36:57.070 09:13:10 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730880790.XXXXXX 00:36:57.070 09:13:10 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730880790.vJDFpW 00:36:57.070 09:13:10 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:36:57.070 09:13:10 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:36:57.070 09:13:10 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 
00:36:57.070 09:13:10 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:57.070 09:13:10 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:57.070 09:13:10 -- common/autobuild_common.sh@502 -- $ get_config_params 00:36:57.070 09:13:10 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:36:57.070 09:13:10 -- common/autotest_common.sh@10 -- $ set +x 00:36:57.070 09:13:10 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:36:57.070 09:13:10 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:36:57.070 09:13:10 -- pm/common@17 -- $ local monitor 00:36:57.070 09:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.070 09:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.070 09:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.070 09:13:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.070 09:13:10 -- pm/common@25 -- $ sleep 1 00:36:57.070 09:13:10 -- pm/common@21 -- $ date +%s 00:36:57.070 09:13:10 -- pm/common@21 -- $ date +%s 00:36:57.070 09:13:10 -- pm/common@21 -- $ date +%s 00:36:57.070 09:13:10 -- pm/common@21 -- $ date +%s 00:36:57.070 09:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880790 00:36:57.070 09:13:10 -- pm/common@21 -- $ sudo -E 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880790 00:36:57.070 09:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880790 00:36:57.070 09:13:10 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1730880790 00:36:57.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880790_collect-vmstat.pm.log 00:36:57.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880790_collect-cpu-load.pm.log 00:36:57.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880790_collect-cpu-temp.pm.log 00:36:57.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1730880790_collect-bmc-pm.bmc.pm.log 00:36:58.007 09:13:11 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:36:58.007 09:13:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:36:58.007 09:13:11 -- spdk/autopackage.sh@14 -- $ timing_finish 00:36:58.007 09:13:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:58.007 09:13:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:58.007 09:13:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:58.007 09:13:11 -- spdk/autopackage.sh@1 -- $ 
stop_monitor_resources 00:36:58.007 09:13:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:58.007 09:13:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:58.007 09:13:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:58.007 09:13:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:58.007 09:13:11 -- pm/common@44 -- $ pid=1030551 00:36:58.007 09:13:11 -- pm/common@50 -- $ kill -TERM 1030551 00:36:58.007 09:13:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:58.007 09:13:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:58.007 09:13:11 -- pm/common@44 -- $ pid=1030553 00:36:58.007 09:13:11 -- pm/common@50 -- $ kill -TERM 1030553 00:36:58.007 09:13:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:58.007 09:13:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:58.007 09:13:11 -- pm/common@44 -- $ pid=1030555 00:36:58.007 09:13:11 -- pm/common@50 -- $ kill -TERM 1030555 00:36:58.007 09:13:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:58.007 09:13:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:58.007 09:13:11 -- pm/common@44 -- $ pid=1030583 00:36:58.007 09:13:11 -- pm/common@50 -- $ sudo -E kill -TERM 1030583 00:36:58.007 + [[ -n 628532 ]] 00:36:58.007 + sudo kill 628532 00:36:58.017 [Pipeline] } 00:36:58.033 [Pipeline] // stage 00:36:58.038 [Pipeline] } 00:36:58.052 [Pipeline] // timeout 00:36:58.057 [Pipeline] } 00:36:58.071 [Pipeline] // catchError 00:36:58.076 [Pipeline] } 00:36:58.090 [Pipeline] // wrap 00:36:58.096 [Pipeline] } 00:36:58.109 [Pipeline] // catchError 00:36:58.118 [Pipeline] stage 00:36:58.120 [Pipeline] { (Epilogue) 00:36:58.133 [Pipeline] 
catchError 00:36:58.135 [Pipeline] { 00:36:58.148 [Pipeline] echo 00:36:58.150 Cleanup processes 00:36:58.156 [Pipeline] sh 00:36:58.450 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.450 1030747 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:58.450 1030863 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.464 [Pipeline] sh 00:36:58.749 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.749 ++ grep -v 'sudo pgrep' 00:36:58.749 ++ awk '{print $1}' 00:36:58.749 + sudo kill -9 1030747 00:36:58.761 [Pipeline] sh 00:36:59.047 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:09.028 [Pipeline] sh 00:37:09.317 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:09.317 Artifacts sizes are good 00:37:09.331 [Pipeline] archiveArtifacts 00:37:09.338 Archiving artifacts 00:37:09.491 [Pipeline] sh 00:37:09.776 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:09.791 [Pipeline] cleanWs 00:37:09.803 [WS-CLEANUP] Deleting project workspace... 00:37:09.803 [WS-CLEANUP] Deferred wipeout is used... 00:37:09.810 [WS-CLEANUP] done 00:37:09.812 [Pipeline] } 00:37:09.831 [Pipeline] // catchError 00:37:09.843 [Pipeline] sh 00:37:10.180 + logger -p user.info -t JENKINS-CI 00:37:10.189 [Pipeline] } 00:37:10.203 [Pipeline] // stage 00:37:10.209 [Pipeline] } 00:37:10.224 [Pipeline] // node 00:37:10.231 [Pipeline] End of Pipeline 00:37:10.297 Finished: SUCCESS